Authentication and authorization


OpenShift Container Platform 4.10

Configuring user authentication and access controls for users and services

Red Hat OpenShift Documentation Team

Abstract

This document provides instructions for defining identity providers in OpenShift Container Platform. It also discusses how to configure role-based access control to secure the cluster.

Chapter 1. Overview of authentication and authorization

1.1. Glossary of common terms for OpenShift Container Platform authentication and authorization

This glossary defines common terms that are used in OpenShift Container Platform authentication and authorization.

authentication
Authentication determines access to an OpenShift Container Platform cluster and ensures that only authenticated users can access the cluster.
authorization
Authorization determines whether the identified user has permissions to perform the requested action.
bearer token
A bearer token is used to authenticate to the API with the Authorization: Bearer <token> header.
Cloud Credential Operator
The Cloud Credential Operator (CCO) manages cloud provider credentials as custom resource definitions (CRDs).
config map
A config map provides a way to inject configuration data into the pods. You can reference the data stored in a config map in a volume of type ConfigMap. Applications running in a pod can use this data.
containers
Lightweight and executable images that consist of software and all of its dependencies. Because containers virtualize the operating system, you can run containers in a data center, a public or private cloud, or on your local host.
Custom Resource (CR)
A CR is an extension of the Kubernetes API.
group
A group is a set of users. A group is useful for granting permissions to multiple users at one time.
HTPasswd
HTPasswd updates the files that store user names and passwords for authentication of HTTP users.
Keystone
Keystone is a Red Hat OpenStack Platform (RHOSP) project that provides identity, token, catalog, and policy services.
Lightweight directory access protocol (LDAP)
LDAP is a protocol that queries user information.
manual mode
In manual mode, a user manages cloud credentials instead of the Cloud Credential Operator (CCO).
mint mode
Mint mode is the default and recommended best practice setting for the Cloud Credential Operator (CCO) to use on the platforms for which it is supported. In this mode, the CCO uses the provided administrator-level cloud credential to create new credentials for components in the cluster with only the specific permissions that are required.
namespace
A namespace isolates specific system resources that are visible to all processes. Inside a namespace, only processes that are members of that namespace can see those resources.
node
A node is a worker machine in the OpenShift Container Platform cluster. A node is either a virtual machine (VM) or a physical machine.
OAuth client
An OAuth client is used to obtain a bearer token.
OAuth server
The OpenShift Container Platform control plane includes a built-in OAuth server that determines the user’s identity from the configured identity provider and creates an access token.
OpenID Connect
OpenID Connect is a protocol that authenticates users with single sign-on (SSO) so they can access sites that use OpenID providers.
passthrough mode
In passthrough mode, the Cloud Credential Operator (CCO) passes the provided cloud credential to the components that request cloud credentials.
pod
A pod is the smallest logical unit in Kubernetes. A pod consists of one or more containers that run on a worker node.
regular users
Users that are created automatically in the cluster upon first login or that can be created through the API.
request header
A request header is an HTTP header that provides information about the HTTP request context so that the server can respond to the request appropriately.
role-based access control (RBAC)
A key security control to ensure that cluster users and workloads have access to only the resources required to execute their roles.
service accounts
Service accounts are used by cluster components or applications.
system users
Users that are created automatically when the cluster is installed.
users
A user is an entity that can make requests to the API.

1.2. About authentication in OpenShift Container Platform

To control access to an OpenShift Container Platform cluster, a cluster administrator can configure user authentication and ensure only approved users access the cluster.

To interact with an OpenShift Container Platform cluster, users must first authenticate to the OpenShift Container Platform API in some way. You can authenticate by providing an OAuth access token or an X.509 client certificate in your requests to the OpenShift Container Platform API.

Note

If you do not present a valid access token or certificate, your request is unauthenticated and you receive an HTTP 401 error.

An administrator can configure authentication through the following tasks:

1.3. About authorization in OpenShift Container Platform

Authorization involves determining whether the identified user has permissions to perform the requested action.

Administrators can define permissions and assign them to users using the RBAC objects, such as rules, roles, and bindings. To understand how authorization works in OpenShift Container Platform, see Evaluating authorization.
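
For example, a cluster administrator might grant a user the default view role within a single project. The following is a minimal sketch; the user and project names are placeholders:

$ oc adm policy add-role-to-user view <username> -n <project>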

You can also control access to an OpenShift Container Platform cluster through projects and namespaces.

Along with controlling user access to a cluster, you can also control the actions a pod can perform and the resources it can access using security context constraints (SCCs).

You can manage authorization for OpenShift Container Platform through the following tasks:

Chapter 2. Understanding authentication

For users to interact with OpenShift Container Platform, they must first authenticate to the cluster. The authentication layer identifies the user associated with requests to the OpenShift Container Platform API. The authorization layer then uses information about the requesting user to determine if the request is allowed.

As an administrator, you can configure authentication for OpenShift Container Platform.

2.1. Users

A user in OpenShift Container Platform is an entity that can make requests to the OpenShift Container Platform API. An OpenShift Container Platform User object represents an actor that can be granted permissions in the system by adding roles to the user or to the user's groups. Typically, this represents the account of a developer or administrator that is interacting with OpenShift Container Platform.

Several types of users can exist:

User type | Description

Regular users

This is the way most interactive OpenShift Container Platform users are represented. Regular users are created automatically in the system upon first login or can be created via the API. Regular users are represented with the User object. Examples: joe, alice

System users

Many of these are created automatically when the infrastructure is defined, mainly for the purpose of enabling the infrastructure to interact with the API securely. They include a cluster administrator (with access to everything), a per-node user, users for use by routers and registries, and various others. Finally, there is an anonymous system user that is used by default for unauthenticated requests. Examples: system:admin, system:openshift-registry, system:node:node1.example.com

Service accounts

These are special system users associated with projects; some are created automatically when the project is first created, while project administrators can create more for the purpose of defining access to the contents of each project. Service accounts are represented with the ServiceAccount object. Examples: system:serviceaccount:default:deployer, system:serviceaccount:foo:builder

Each user must authenticate in some way to access OpenShift Container Platform. API requests with no authentication or invalid authentication are authenticated as requests by the anonymous system user. After authentication, policy determines what the user is authorized to do.
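
For example, a cluster administrator can list the User objects that exist in the cluster. The following is a minimal sketch using standard oc commands; the user name is a placeholder:

$ oc get users
$ oc describe user <username>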

2.2. Groups

A user can be assigned to one or more groups, each of which represents a certain set of users. Groups are useful when managing authorization policies to grant permissions to multiple users at once, for example, allowing access to objects within a project, rather than granting permissions to each user individually.
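
For example, a cluster administrator can create a group and add users to it by using the oc adm groups commands. The following is a minimal sketch; the group and user names are placeholders:

$ oc adm groups new <group_name> <user1> <user2>
$ oc get groups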

In addition to explicitly defined groups, there are also system groups, or virtual groups, that are automatically provisioned by the cluster.

The following default virtual groups are most important:

Virtual group | Description

system:authenticated

Automatically associated with all authenticated users.

system:authenticated:oauth

Automatically associated with all users authenticated with an OAuth access token.

system:unauthenticated

Automatically associated with all unauthenticated users.

2.3. API authentication

Requests to the OpenShift Container Platform API are authenticated using the following methods:

OAuth access tokens
  • Obtained from the OpenShift Container Platform OAuth server using the <namespace_route>/oauth/authorize and <namespace_route>/oauth/token endpoints.
  • Sent as an Authorization: Bearer… header.
  • Sent as a websocket subprotocol header in the form base64url.bearer.authorization.k8s.io.<base64url-encoded-token> for websocket requests.
X.509 client certificates
  • Requires an HTTPS connection to the API server.
  • Verified by the API server against a trusted certificate authority bundle.
  • The API server creates and distributes certificates to controllers to authenticate themselves.

Any request with an invalid access token or an invalid certificate is rejected by the authentication layer with a 401 error.

If no access token or certificate is presented, the authentication layer assigns the system:anonymous virtual user and the system:unauthenticated virtual group to the request. This allows the authorization layer to determine which requests, if any, an anonymous user is allowed to make.
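
For example, a client that already holds an OAuth access token can call the API directly by sending it as a bearer token. The following is a minimal sketch; the API server URL and the CA file path are placeholders for your environment:

$ TOKEN=$(oc whoami -t)
$ curl -H "Authorization: Bearer $TOKEN" \
    --cacert /path/to/ca.crt \
    "https://api.<cluster_domain>:6443/apis/user.openshift.io/v1/users/~"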

2.3.1. OpenShift Container Platform OAuth server

The OpenShift Container Platform master includes a built-in OAuth server. Users obtain OAuth access tokens to authenticate themselves to the API.

When a person requests a new OAuth token, the OAuth server uses the configured identity provider to determine the identity of the person making the request.

It then determines what user that identity maps to, creates an access token for that user, and returns the token for use.

2.3.1.1. OAuth token requests

Every request for an OAuth token must specify the OAuth client that will receive and use the token. The following OAuth clients are automatically created when starting the OpenShift Container Platform API:

OAuth client | Usage

openshift-browser-client

Requests tokens at <namespace_route>/oauth/token/request with a user-agent that can handle interactive logins. [1]

openshift-challenging-client

Requests tokens with a user-agent that can handle WWW-Authenticate challenges.

  1. <namespace_route> refers to the namespace route. This is found by running the following command:

    $ oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host

All requests for OAuth tokens involve a request to <namespace_route>/oauth/authorize. Most authentication integrations place an authenticating proxy in front of this endpoint, or configure OpenShift Container Platform to validate credentials against a backing identity provider. Requests to <namespace_route>/oauth/authorize can come from user-agents that cannot display interactive login pages, such as the CLI. Therefore, OpenShift Container Platform supports authenticating using a WWW-Authenticate challenge in addition to interactive login flows.

If an authenticating proxy is placed in front of the <namespace_route>/oauth/authorize endpoint, it sends unauthenticated, non-browser user-agents WWW-Authenticate challenges rather than displaying an interactive login page or redirecting to an interactive login flow.

Note

To prevent cross-site request forgery (CSRF) attacks against browser clients, Basic authentication challenges are sent only if an X-CSRF-Token header is present on the request. Clients that expect to receive Basic WWW-Authenticate challenges must set this header to a non-empty value.
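
For example, a non-browser client might drive this challenge flow by supplying credentials and the X-CSRF-Token header described above, then reading the token from the access_token fragment of the Location header in the resulting 302 response. This is a sketch only; it assumes the configured identity provider supports WWW-Authenticate challenges:

$ curl -u <username>:<password> -H "X-CSRF-Token: xxx" -k -s -D - -o /dev/null \
    "https://<namespace_route>/oauth/authorize?response_type=token&client_id=openshift-challenging-client"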

If the authenticating proxy cannot support WWW-Authenticate challenges, or if OpenShift Container Platform is configured to use an identity provider that does not support WWW-Authenticate challenges, you must use a browser to manually obtain a token from <namespace_route>/oauth/token/request.

2.3.1.2. API impersonation

You can configure a request to the OpenShift Container Platform API to act as though it originated from another user. For more information, see User impersonation in the Kubernetes documentation.
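
For example, with oc you can impersonate another user by using the --as flag, provided you have been granted permission to impersonate. The following is a minimal sketch with placeholder names:

$ oc get pods -n <project> --as=<username>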

2.3.1.3. Authentication metrics for Prometheus

OpenShift Container Platform captures the following Prometheus system metrics during authentication attempts:

  • openshift_auth_basic_password_count counts the number of oc login user name and password attempts.
  • openshift_auth_basic_password_count_result counts the number of oc login user name and password attempts by result, success or error.
  • openshift_auth_form_password_count counts the number of web console login attempts.
  • openshift_auth_form_password_count_result counts the number of web console login attempts by result, success or error.
  • openshift_auth_password_total counts the total number of oc login and web console login attempts.
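
For example, you can query these metrics through the cluster monitoring stack. The following is a sketch that queries the Thanos Querier route with a bearer token; it assumes the default monitoring configuration, that the thanos-querier route exists in the openshift-monitoring namespace, and that your user is allowed to query cluster metrics:

$ curl -k -H "Authorization: Bearer $(oc whoami -t)" \
    "https://$(oc get route thanos-querier -n openshift-monitoring -o jsonpath='{.spec.host}')/api/v1/query?query=openshift_auth_password_total"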

Chapter 3. Configuring the internal OAuth server

3.1. OpenShift Container Platform OAuth server

The OpenShift Container Platform master includes a built-in OAuth server. Users obtain OAuth access tokens to authenticate themselves to the API.

When a person requests a new OAuth token, the OAuth server uses the configured identity provider to determine the identity of the person making the request.

It then determines what user that identity maps to, creates an access token for that user, and returns the token for use.

3.2. OAuth token request flows and responses

The OAuth server supports the standard authorization code grant and the implicit grant OAuth authorization flows.

When requesting an OAuth token using the implicit grant flow (response_type=token) with a client_id configured to request WWW-Authenticate challenges (like openshift-challenging-client), these are the possible server responses from /oauth/authorize, and how they should be handled:

Status | Content | Client response

302

Location header containing an access_token parameter in the URL fragment (RFC 6749 section 4.2.2)

Use the access_token value as the OAuth token.

302

Location header containing an error query parameter (RFC 6749 section 4.1.2.1)

Fail, optionally surfacing the error (and optional error_description) query values to the user.

302

Other Location header

Follow the redirect, and process the result using these rules.

401

WWW-Authenticate header present

Respond to the challenge if the type is recognized (for example, Basic or Negotiate), resubmit the request, and process the result using these rules.

401

WWW-Authenticate header missing

No challenge authentication is possible. Fail and show response body (which might contain links or details on alternate methods to obtain an OAuth token).

Other

Other

Fail, optionally surfacing response body to the user.

3.3. Options for the internal OAuth server

Several configuration options are available for the internal OAuth server.

3.3.1. OAuth token duration options

The internal OAuth server generates two kinds of tokens:

Token | Description

Access tokens

Longer-lived tokens that grant access to the API.

Authorize codes

Short-lived tokens whose only use is to be exchanged for an access token.

You can configure the default duration for both types of token. If necessary, you can override the duration of the access token by using an OAuthClient object definition.
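
For example, the following is a minimal sketch of an OAuthClient object that overrides the access token duration for a hypothetical client named demo; the accessTokenMaxAgeSeconds value mirrors the cluster-wide setting shown in the next section:

apiVersion: oauth.openshift.io/v1
kind: OAuthClient
metadata:
  name: demo
grantMethod: auto
accessTokenMaxAgeSeconds: 172800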

3.3.2. OAuth grant options

When the OAuth server receives token requests for a client to which the user has not previously granted permission, the action that the OAuth server takes is dependent on the OAuth client’s grant strategy.

The OAuth client requesting the token must provide its own grant strategy.

You can apply the following default methods:

Grant option | Description

auto

Auto-approve the grant and retry the request.

prompt

Prompt the user to approve or deny the grant.
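
For example, you can change the grant strategy of an existing OAuth client by patching its grantMethod field. The following is a sketch with a placeholder client name:

$ oc patch oauthclient <oauth_client> --type merge -p '{"grantMethod":"prompt"}'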

3.4. Configuring the internal OAuth server’s token duration

You can configure default options for the internal OAuth server’s token duration.

Important

By default, tokens are only valid for 24 hours. Existing sessions expire after this time elapses.

If the default time is insufficient, then this can be modified using the following procedure.

Procedure

  1. Create a configuration file that contains the token duration options. The following file sets this to 48 hours, twice the default.

    apiVersion: config.openshift.io/v1
    kind: OAuth
    metadata:
      name: cluster
    spec:
      tokenConfig:
        accessTokenMaxAgeSeconds: 172800 1
    1
    Set accessTokenMaxAgeSeconds to control the lifetime of access tokens. The default lifetime is 24 hours, or 86400 seconds. This attribute cannot be negative. If set to zero, the default lifetime is used.
  2. Apply the new configuration file:

    Note

    Because you update the existing OAuth server, you must use the oc apply command to apply the change.

    $ oc apply -f </path/to/file.yaml>
  3. Confirm that the changes are in effect:

    $ oc describe oauth.config.openshift.io/cluster

    Example output

    ...
    Spec:
      Token Config:
        Access Token Max Age Seconds:  172800
    ...

3.5. Configuring token inactivity timeout for the internal OAuth server

You can configure OAuth tokens to expire after a set period of inactivity. By default, no token inactivity timeout is set.

Note

If the token inactivity timeout is also configured in your OAuth client, that value overrides the timeout that is set in the internal OAuth server configuration.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.
  • You have configured an identity provider (IDP).

Procedure

  1. Update the OAuth configuration to set a token inactivity timeout.

    1. Edit the OAuth object:

      $ oc edit oauth cluster

      Add the spec.tokenConfig.accessTokenInactivityTimeout field and set your timeout value:

      apiVersion: config.openshift.io/v1
      kind: OAuth
      metadata:
      ...
      spec:
        tokenConfig:
          accessTokenInactivityTimeout: 400s 1
      1
      Set a value with the appropriate units, for example 400s for 400 seconds, or 30m for 30 minutes. The minimum allowed timeout value is 300s.
    2. Save the file to apply the changes.
  2. Check that the OAuth server pods have restarted:

    $ oc get clusteroperators authentication

    Do not continue to the next step until PROGRESSING is listed as False, as shown in the following output:

    Example output

    NAME             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
    authentication   4.10.0    True        False         False      145m

  3. Check that a new revision of the Kubernetes API server pods has rolled out. This will take several minutes.

    $ oc get clusteroperators kube-apiserver

    Do not continue to the next step until PROGRESSING is listed as False, as shown in the following output:

    Example output

    NAME             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
    kube-apiserver   4.10.0     True        False         False      145m

    If PROGRESSING is showing True, wait a few minutes and try again.

Verification

  1. Log in to the cluster with an identity from your IDP.
  2. Execute a command and verify that it was successful.
  3. Wait longer than the configured timeout without using the identity. In this procedure’s example, wait longer than 400 seconds.
  4. Try to execute a command from the same identity’s session.

    This command should fail because the token should have expired due to inactivity longer than the configured timeout.

    Example output

    error: You must be logged in to the server (Unauthorized)

3.6. Customizing the internal OAuth server URL

You can customize the internal OAuth server URL by setting the custom hostname and TLS certificate in the spec.componentRoutes field of the cluster Ingress configuration.

Warning

If you update the internal OAuth server URL, you might break trust from components in the cluster that need to communicate with the OpenShift OAuth server to retrieve OAuth access tokens. Components that need to trust the OAuth server will need to include the proper CA bundle when calling OAuth endpoints. For example:

$ oc login -u <username> -p <password> --certificate-authority=<path_to_ca.crt> 1
1
For self-signed certificates, the ca.crt file must contain the custom CA certificate, otherwise the login will not succeed.

The Cluster Authentication Operator publishes the OAuth server’s serving certificate in the oauth-serving-cert config map in the openshift-config-managed namespace. You can find the certificate in the data.ca-bundle.crt key of the config map.
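
For example, you can extract that CA bundle to a local file and pass it to the --certificate-authority option shown above. The following is a minimal sketch:

$ oc get configmap oauth-serving-cert -n openshift-config-managed \
    -o jsonpath='{.data.ca-bundle\.crt}' > ca.crt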

Prerequisites

  • You have logged in to the cluster as a user with administrative privileges.
  • You have created a secret in the openshift-config namespace containing the TLS certificate and key. This is required if the domain for the custom hostname suffix does not match the cluster domain suffix. The secret is optional if the suffix matches.

    Tip

    You can create a TLS secret by using the oc create secret tls command.
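
    For example (a minimal sketch with placeholder names for the secret and the certificate and key files):

    $ oc create secret tls <secret_name> --cert=tls.crt --key=tls.key -n openshift-config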

Procedure

  1. Edit the cluster Ingress configuration:

    $ oc edit ingress.config.openshift.io cluster
  2. Set the custom hostname and optionally the serving certificate and key:

    apiVersion: config.openshift.io/v1
    kind: Ingress
    metadata:
      name: cluster
    spec:
      componentRoutes:
        - name: oauth-openshift
          namespace: openshift-authentication
          hostname: <custom_hostname> 1
          servingCertKeyPairSecret:
            name: <secret_name> 2
    1
    The custom hostname.
    2
    Reference to a secret in the openshift-config namespace that contains a TLS certificate (tls.crt) and key (tls.key). This is required if the domain for the custom hostname suffix does not match the cluster domain suffix. The secret is optional if the suffix matches.
  3. Save the file to apply the changes.

3.7. OAuth server metadata

Applications running in OpenShift Container Platform might have to discover information about the built-in OAuth server. For example, they might have to discover the address of the <namespace_route> without manual configuration. To aid in this, OpenShift Container Platform implements the IETF OAuth 2.0 Authorization Server Metadata draft specification.

Thus, any application running inside the cluster can issue a GET request to https://openshift.default.svc/.well-known/oauth-authorization-server to fetch the following information:
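
For example, from inside a pod you might issue the request as follows. This is a minimal sketch; it assumes that curl is available in the container and that the service account CA bundle mounted into the pod validates the server certificate:

$ curl -s --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    https://openshift.default.svc/.well-known/oauth-authorization-server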

{
  "issuer": "https://<namespace_route>", 1
  "authorization_endpoint": "https://<namespace_route>/oauth/authorize", 2
  "token_endpoint": "https://<namespace_route>/oauth/token", 3
  "scopes_supported": [ 4
    "user:full",
    "user:info",
    "user:check-access",
    "user:list-scoped-projects",
    "user:list-projects"
  ],
  "response_types_supported": [ 5
    "code",
    "token"
  ],
  "grant_types_supported": [ 6
    "authorization_code",
    "implicit"
  ],
  "code_challenge_methods_supported": [ 7
    "plain",
    "S256"
  ]
}
1
The authorization server’s issuer identifier, which is a URL that uses the https scheme and has no query or fragment components. This is the location where .well-known RFC 5785 resources containing information about the authorization server are published.
2
URL of the authorization server’s authorization endpoint. See RFC 6749.
3
URL of the authorization server’s token endpoint. See RFC 6749.
4
JSON array containing a list of the OAuth 2.0 RFC 6749 scope values that this authorization server supports. Note that not all supported scope values are advertised.
5
JSON array containing a list of the OAuth 2.0 response_type values that this authorization server supports. The array values used are the same as those used with the response_types parameter defined by "OAuth 2.0 Dynamic Client Registration Protocol" in RFC 7591.
6
JSON array containing a list of the OAuth 2.0 grant type values that this authorization server supports. The array values used are the same as those used with the grant_types parameter defined by OAuth 2.0 Dynamic Client Registration Protocol in RFC 7591.
7
JSON array containing a list of PKCE RFC 7636 code challenge methods supported by this authorization server. Code challenge method values are used in the code_challenge_method parameter defined in Section 4.3 of RFC 7636. The valid code challenge method values are those registered in the IANA PKCE Code Challenge Methods registry. See IANA OAuth Parameters.

3.8. Troubleshooting OAuth API events

In some cases the API server returns an unexpected condition error message that is difficult to debug without direct access to the API master log. The underlying reason for the error is purposely obscured in order to avoid providing an unauthenticated user with information about the server’s state.

A subset of these errors is related to service account OAuth configuration issues. These issues are captured in events that can be viewed by non-administrator users. When encountering an unexpected condition server error during OAuth, run oc get events to view these events under ServiceAccount.

The following example warns of a service account that is missing a proper OAuth redirect URI:

$ oc get events | grep ServiceAccount

Example output

1m         1m          1         proxy                    ServiceAccount                                  Warning   NoSAOAuthRedirectURIs   service-account-oauth-client-getter   system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>

Running oc describe sa/<service_account_name> reports any OAuth events associated with the given service account name.

$ oc describe sa/proxy | grep -A5 Events

Example output

Events:
  FirstSeen     LastSeen        Count   From                                    SubObjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                                    -------------   --------        ------                  -------
  3m            3m              1       service-account-oauth-client-getter                     Warning         NoSAOAuthRedirectURIs   system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>

The following is a list of the possible event errors:

No redirect URI annotations or an invalid URI is specified

Reason                  Message
NoSAOAuthRedirectURIs   system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>

Invalid route specified

Reason                  Message
NoSAOAuthRedirectURIs   [routes.route.openshift.io "<name>" not found, system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>]

Invalid reference type specified

Reason                  Message
NoSAOAuthRedirectURIs   [no kind "<name>" is registered for version "v1", system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>]

Missing SA tokens

Reason                  Message
NoSAOAuthTokens         system:serviceaccount:myproject:proxy has no tokens
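
For example, one way to resolve the NoSAOAuthRedirectURIs warning is to add a static redirect URI annotation to the service account, following the format given in the message. The following is a minimal sketch using the names from the events above; the URI is a placeholder:

$ oc annotate serviceaccount proxy -n myproject \
    serviceaccounts.openshift.io/oauth-redirecturi.first=https://example.com/callback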

Chapter 4. Configuring OAuth clients

Several OAuth clients are created by default in OpenShift Container Platform. You can also register and configure additional OAuth clients.

4.1. Default OAuth clients

The following OAuth clients are automatically created when starting the OpenShift Container Platform API:

OAuth client | Usage

openshift-browser-client

Requests tokens at <namespace_route>/oauth/token/request with a user-agent that can handle interactive logins. [1]

openshift-challenging-client

Requests tokens with a user-agent that can handle WWW-Authenticate challenges.

  1. <namespace_route> refers to the namespace route. This is found by running the following command:

    $ oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host

4.2. Registering an additional OAuth client

If you need an additional OAuth client to manage authentication for your OpenShift Container Platform cluster, you can register one.

Procedure

  • To register additional OAuth clients:

    $ oc create -f <(echo '
    kind: OAuthClient
    apiVersion: oauth.openshift.io/v1
    metadata:
     name: demo 1
    secret: "..." 2
    redirectURIs:
     - "http://www.example.com/" 3
    grantMethod: prompt 4
    ')
    1
    The name of the OAuth client is used as the client_id parameter when making requests to <namespace_route>/oauth/authorize and <namespace_route>/oauth/token.
    2
    The secret is used as the client_secret parameter when making requests to <namespace_route>/oauth/token.
    3
    The redirect_uri parameter specified in requests to <namespace_route>/oauth/authorize and <namespace_route>/oauth/token must be equal to or prefixed by one of the URIs listed in the redirectURIs parameter value.
    4
    The grantMethod is used to determine what action to take when this client requests tokens and has not yet been granted access by the user. Specify auto to automatically approve the grant and retry the request, or prompt to prompt the user to approve or deny the grant.

4.3. Configuring token inactivity timeout for an OAuth client

You can configure OAuth clients to expire OAuth tokens after a set period of inactivity. By default, no token inactivity timeout is set.

Note

If the token inactivity timeout is also configured in the internal OAuth server configuration, the timeout that is set in the OAuth client overrides that value.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.
  • You have configured an identity provider (IDP).

Procedure

  • Update the OAuthClient configuration to set a token inactivity timeout.

    1. Edit the OAuthClient object:

      $ oc edit oauthclient <oauth_client> 1
      1
      Replace <oauth_client> with the OAuth client to configure, for example, console.

      Add the accessTokenInactivityTimeoutSeconds field and set your timeout value:

      apiVersion: oauth.openshift.io/v1
      grantMethod: auto
      kind: OAuthClient
      metadata:
      ...
      accessTokenInactivityTimeoutSeconds: 600 1
      1
      The minimum allowed timeout value in seconds is 300.
    2. Save the file to apply the changes.

Verification

  1. Log in to the cluster with an identity from your IDP. Be sure to use the OAuth client that you just configured.
  2. Perform an action and verify that it was successful.
  3. Wait longer than the configured timeout without using the identity. In this procedure’s example, wait longer than 600 seconds.
  4. Try to perform an action from the same identity’s session.

    This attempt should fail because the token should have expired due to inactivity longer than the configured timeout.
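
    The failure looks like the Unauthorized error shown in the internal OAuth server procedure, for example:

    error: You must be logged in to the server (Unauthorized)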

4.4. Additional resources

Chapter 5. Managing user-owned OAuth access tokens

Users can review their own OAuth access tokens and delete any that are no longer needed.

5.1. Listing user-owned OAuth access tokens

You can list your user-owned OAuth access tokens. Token names are not sensitive and cannot be used to log in.

Procedure

  • List all user-owned OAuth access tokens:

    $ oc get useroauthaccesstokens

    Example output

    NAME       CLIENT NAME                    CREATED                EXPIRES                         REDIRECT URI                                                       SCOPES
    <token1>   openshift-challenging-client   2021-01-11T19:25:35Z   2021-01-12 19:25:35 +0000 UTC   https://oauth-openshift.apps.example.com/oauth/token/implicit      user:full
    <token2>   openshift-browser-client       2021-01-11T19:27:06Z   2021-01-12 19:27:06 +0000 UTC   https://oauth-openshift.apps.example.com/oauth/token/display       user:full
    <token3>   console                        2021-01-11T19:26:29Z   2021-01-12 19:26:29 +0000 UTC   https://console-openshift-console.apps.example.com/auth/callback   user:full

  • List user-owned OAuth access tokens for a particular OAuth client:

    $ oc get useroauthaccesstokens --field-selector=clientName="console"

    Example output

    NAME       CLIENT NAME                    CREATED                EXPIRES                         REDIRECT URI                                                       SCOPES
    <token3>   console                        2021-01-11T19:26:29Z   2021-01-12 19:26:29 +0000 UTC   https://console-openshift-console.apps.example.com/auth/callback   user:full

5.2. Viewing the details of a user-owned OAuth access token

You can view the details of a user-owned OAuth access token.

Procedure

  • Describe the details of a user-owned OAuth access token:

    $ oc describe useroauthaccesstokens <token_name>

    Example output

    Name:                        <token_name> 1
    Namespace:
    Labels:                      <none>
    Annotations:                 <none>
    API Version:                 oauth.openshift.io/v1
    Authorize Token:             sha256~Ksckkug-9Fg_RWn_AUysPoIg-_HqmFI9zUL_CgD8wr8
    Client Name:                 openshift-browser-client 2
    Expires In:                  86400 3
    Inactivity Timeout Seconds:  317 4
    Kind:                        UserOAuthAccessToken
    Metadata:
      Creation Timestamp:  2021-01-11T19:27:06Z
      Managed Fields:
        API Version:  oauth.openshift.io/v1
        Fields Type:  FieldsV1
        fieldsV1:
          f:authorizeToken:
          f:clientName:
          f:expiresIn:
          f:redirectURI:
          f:scopes:
          f:userName:
          f:userUID:
        Manager:         oauth-server
        Operation:       Update
        Time:            2021-01-11T19:27:06Z
      Resource Version:  30535
      Self Link:         /apis/oauth.openshift.io/v1/useroauthaccesstokens/<token_name>
      UID:               f9d00b67-ab65-489b-8080-e427fa3c6181
    Redirect URI:        https://oauth-openshift.apps.example.com/oauth/token/display
    Scopes:
      user:full 5
    User Name:  <user_name> 6
    User UID:   82356ab0-95f9-4fb3-9bc0-10f1d6a6a345
    Events:     <none>

    1
    The token name, which is the sha256 hash of the token. Token names are not sensitive and cannot be used to log in.
    2
    The client name, which describes where the token originated from.
    3
    The value in seconds from the creation time before this token expires.
    4
    If there is a token inactivity timeout set for the OAuth server, this is the value in seconds from the creation time before this token can no longer be used.
    5
    The scopes for this token.
    6
    The user name associated with this token.

5.3. Deleting user-owned OAuth access tokens

The oc logout command only invalidates the OAuth token for the active session. You can use the following procedure to delete any user-owned OAuth tokens that are no longer needed.

Deleting an OAuth access token logs out the user from all sessions that use the token.

Procedure

  • Delete the user-owned OAuth access token:

    $ oc delete useroauthaccesstokens <token_name>

    Example output

    useroauthaccesstoken.oauth.openshift.io "<token_name>" deleted

Chapter 6. Understanding identity provider configuration

The OpenShift Container Platform master includes a built-in OAuth server. Developers and administrators obtain OAuth access tokens to authenticate themselves to the API.

As an administrator, you can configure OAuth to specify an identity provider after you install your cluster.

6.1. About identity providers in OpenShift Container Platform

By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster.

Note

OpenShift Container Platform user names containing /, :, and % are not supported.

6.2. Supported identity providers

You can configure the following types of identity providers:

Identity provider | Description

htpasswd

Configure the htpasswd identity provider to validate user names and passwords against a flat file generated using htpasswd.

Keystone

Configure the keystone identity provider to integrate your OpenShift Container Platform cluster with Keystone to enable shared authentication with an OpenStack Keystone v3 server configured to store users in an internal database.

LDAP

Configure the ldap identity provider to validate user names and passwords against an LDAPv3 server, using simple bind authentication.

Basic authentication

Configure a basic-authentication identity provider for users to log in to OpenShift Container Platform with credentials validated against a remote identity provider. Basic authentication is a generic backend integration mechanism.

Request header

Configure a request-header identity provider to identify users from request header values, such as X-Remote-User. It is typically used in combination with an authenticating proxy, which sets the request header value.

GitHub or GitHub Enterprise

Configure a github identity provider to validate user names and passwords against GitHub or GitHub Enterprise’s OAuth authentication server.

GitLab

Configure a gitlab identity provider to use GitLab.com or any other GitLab instance as an identity provider.

Google

Configure a google identity provider using Google’s OpenID Connect integration.

OpenID Connect

Configure an oidc identity provider to integrate with an OpenID Connect identity provider using an Authorization Code Flow.

Once an identity provider has been defined, you can use RBAC to define and apply permissions.

6.3. Removing the kubeadmin user

After you define an identity provider and create a new cluster-admin user, you can remove the kubeadmin user to improve cluster security.

Warning

If you follow this procedure before another user is a cluster-admin, then OpenShift Container Platform must be reinstalled. It is not possible to undo this command.

Prerequisites

  • You must have configured at least one identity provider.
  • You must have added the cluster-admin role to a user.
  • You must be logged in as an administrator.

Procedure

  • Remove the kubeadmin secrets:

    $ oc delete secrets kubeadmin -n kube-system

6.4. Identity provider parameters

The following parameters are common to all identity providers:

Parameter | Description

name

The provider name is prefixed to provider user names to form an identity name.

mappingMethod

Defines how new identities are mapped to users when they log in. Enter one of the following values:

claim
The default value. Provisions a user with the identity’s preferred user name. Fails if a user with that user name is already mapped to another identity.
lookup
Looks up an existing identity, user identity mapping, and user, but does not automatically provision users or identities. This allows cluster administrators to set up identities and users manually or by using an external process. Using this method requires you to manually provision users.
generate
Provisions a user with the identity’s preferred user name. If a user with the preferred user name is already mapped to an existing identity, a unique user name is generated. For example, myuser2. This method should not be used in combination with external processes that require exact matches between OpenShift Container Platform user names and identity provider user names, such as LDAP group sync.
add
Provisions a user with the identity’s preferred user name. If a user with that user name already exists, the identity is mapped to the existing user, adding to any existing identity mappings for the user. Required when multiple identity providers are configured that identify the same set of users and map to the same user names.
Note

When adding or changing identity providers, you can map identities from the new provider to existing users by setting the mappingMethod parameter to add.

6.5. Sample identity provider CR

The following custom resource (CR) shows the parameters and default values that you use to configure an identity provider. This example uses the htpasswd identity provider.

Sample identity provider CR

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_identity_provider 1
    mappingMethod: claim 2
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret 3

1
This provider name is prefixed to provider user names to form an identity name.
2
Controls how mappings are established between this provider’s identities and User objects.
3
An existing secret containing a file generated using htpasswd.

Chapter 7. Configuring identity providers

7.1. Configuring an htpasswd identity provider

Configure the htpasswd identity provider to allow users to log in to OpenShift Container Platform with credentials from an htpasswd file.

To define an htpasswd identity provider, perform the following tasks:

  1. Create an htpasswd file to store the user and password information.
  2. Create a secret to represent the htpasswd file.
  3. Define an htpasswd identity provider resource that references the secret.
  4. Apply the resource to the default OAuth configuration to add the identity provider.

7.1.1. About identity providers in OpenShift Container Platform

By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster.

Note

OpenShift Container Platform user names containing /, :, and % are not supported.

7.1.2. About htpasswd authentication

Using htpasswd authentication in OpenShift Container Platform allows you to identify users based on an htpasswd file. An htpasswd file is a flat file that contains the user name and hashed password for each user. You can use the htpasswd utility to create this file.

7.1.3. Creating the htpasswd file

See one of the following sections for instructions about how to create the htpasswd file:

7.1.3.1. Creating an htpasswd file using Linux

To use the htpasswd identity provider, you must generate a flat file that contains the user names and passwords for your cluster by using htpasswd.

Prerequisites

  • Have access to the htpasswd utility. On Red Hat Enterprise Linux this is available by installing the httpd-tools package.

Procedure

  1. Create or update your flat file with a user name and hashed password:

    $ htpasswd -c -B -b </path/to/users.htpasswd> <username> <password>

    The command generates a hashed version of the password.

    For example:

    $ htpasswd -c -B -b users.htpasswd <username> <password>

    Example output

    Adding password for user user1

  2. Continue to add or update credentials to the file:

    $ htpasswd -B -b </path/to/users.htpasswd> <user_name> <password>
7.1.3.2. Creating an htpasswd file using Windows

To use the htpasswd identity provider, you must generate a flat file that contains the user names and passwords for your cluster by using htpasswd.

Prerequisites

  • Have access to htpasswd.exe. This file is included in the \bin directory of many Apache httpd distributions.

Procedure

  1. Create or update your flat file with a user name and hashed password:

    > htpasswd.exe -c -B -b <\path\to\users.htpasswd> <username> <password>

    The command generates a hashed version of the password.

    For example:

    > htpasswd.exe -c -B -b users.htpasswd <username> <password>

    Example output

    Adding password for user user1

  2. Continue to add or update credentials to the file:

    > htpasswd.exe -b <\path\to\users.htpasswd> <username> <password>

7.1.4. Creating the htpasswd secret

To use the htpasswd identity provider, you must define a secret that contains the htpasswd user file.

Prerequisites

  • Create an htpasswd file.

Procedure

  • Create a Secret object that contains the htpasswd users file:

    $ oc create secret generic htpass-secret --from-file=htpasswd=<path_to_users.htpasswd> -n openshift-config 1
    1
    The secret key containing the users file for the --from-file argument must be named htpasswd, as shown in the above command.
    Tip

    You can alternatively apply the following YAML to create the secret:

    apiVersion: v1
    kind: Secret
    metadata:
      name: htpass-secret
      namespace: openshift-config
    type: Opaque
    data:
      htpasswd: <base64_encoded_htpasswd_file_contents>

7.1.5. Sample htpasswd CR

The following custom resource (CR) shows the parameters and acceptable values for an htpasswd identity provider.

htpasswd CR

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_htpasswd_provider 1
    mappingMethod: claim 2
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret 3

1
This provider name is prefixed to provider user names to form an identity name.
2
Controls how mappings are established between this provider’s identities and User objects.
3
An existing secret containing a file generated using htpasswd.

Additional resources

7.1.6. Adding an identity provider to your cluster

After you install your cluster, add an identity provider to it so your users can authenticate.

Prerequisites

  • Create an OpenShift Container Platform cluster.
  • Create the custom resource (CR) for your identity providers.
  • You must be logged in as an administrator.

Procedure

  1. Apply the defined CR:

    $ oc apply -f </path/to/CR>
    Note

    If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply. In this case you can safely ignore this warning.

  2. Log in to the cluster as a user from your identity provider, entering the password when prompted.

    $ oc login -u <username>
  3. Confirm that the user logged in successfully, and display the user name.

    $ oc whoami

7.1.7. Updating users for an htpasswd identity provider

You can add or remove users from an existing htpasswd identity provider.

Prerequisites

  • You have created a Secret object that contains the htpasswd user file. This procedure assumes that it is named htpass-secret.
  • You have configured an htpasswd identity provider. This procedure assumes that it is named my_htpasswd_provider.
  • You have access to the htpasswd utility. On Red Hat Enterprise Linux this is available by installing the httpd-tools package.
  • You have cluster administrator privileges.

Procedure

  1. Retrieve the htpasswd file from the htpass-secret Secret object and save the file to your file system:

    $ oc get secret htpass-secret -ojsonpath={.data.htpasswd} -n openshift-config | base64 --decode > users.htpasswd
  2. Add or remove users from the users.htpasswd file.

    • To add a new user:

      $ htpasswd -bB users.htpasswd <username> <password>

      Example output

      Adding password for user <username>

    • To remove an existing user:

      $ htpasswd -D users.htpasswd <username>

      Example output

      Deleting password for user <username>

  3. Replace the htpass-secret Secret object with the updated users in the users.htpasswd file:

    $ oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd --dry-run=client -o yaml -n openshift-config | oc replace -f -
    Tip

    You can alternatively apply the following YAML to replace the secret:

    apiVersion: v1
    kind: Secret
    metadata:
      name: htpass-secret
      namespace: openshift-config
    type: Opaque
    data:
      htpasswd: <base64_encoded_htpasswd_file_contents>
  4. If you removed one or more users, you must additionally remove existing resources for each user.

    1. Delete the User object:

      $ oc delete user <username>

      Example output

      user.user.openshift.io "<username>" deleted

      Be sure to remove the user, otherwise the user can continue using their token as long as it has not expired.

    2. Delete the Identity object for the user:

      $ oc delete identity my_htpasswd_provider:<username>

      Example output

      identity.user.openshift.io "my_htpasswd_provider:<username>" deleted

7.1.8. Configuring identity providers using the web console

Configure your identity provider (IDP) through the web console instead of the CLI.

Prerequisites

  • You must be logged in to the web console as a cluster administrator.

Procedure

  1. Navigate to Administration → Cluster Settings.
  2. Under the Configuration tab, click OAuth.
  3. Under the Identity Providers section, select your identity provider from the Add drop-down menu.
Note

You can specify multiple IDPs through the web console without overwriting existing IDPs.

7.2. Configuring a Keystone identity provider

Configure the keystone identity provider to integrate your OpenShift Container Platform cluster with Keystone to enable shared authentication with an OpenStack Keystone v3 server configured to store users in an internal database. This configuration allows users to log in to OpenShift Container Platform with their Keystone credentials.

7.2.1. About identity providers in OpenShift Container Platform

By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster.

Note

OpenShift Container Platform user names containing /, :, and % are not supported.

7.2.2. About Keystone authentication

Keystone is an OpenStack project that provides identity, token, catalog, and policy services.

You can configure the integration with Keystone so that the new OpenShift Container Platform users are based on either the Keystone user names or unique Keystone IDs. With both methods, users log in by entering their Keystone user name and password. Basing the OpenShift Container Platform users on the Keystone ID is more secure because if you delete a Keystone user and create a new Keystone user with that user name, the new user might have access to the old user’s resources.

7.2.3. Creating the secret

Identity providers use OpenShift Container Platform Secret objects in the openshift-config namespace to contain the client secret, client certificates, and keys.

Procedure

  • Create a Secret object that contains the key and certificate by using the following command:

    $ oc create secret tls <secret_name> --key=key.pem --cert=cert.pem -n openshift-config
    Tip

    You can alternatively apply the following YAML to create the secret:

    apiVersion: v1
    kind: Secret
    metadata:
      name: <secret_name>
      namespace: openshift-config
    type: kubernetes.io/tls
    data:
      tls.crt: <base64_encoded_cert>
      tls.key: <base64_encoded_key>

7.2.4. Creating a config map

Identity providers use OpenShift Container Platform ConfigMap objects in the openshift-config namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider.

Procedure

  • Define an OpenShift Container Platform ConfigMap object containing the certificate authority by using the following command. The certificate authority must be stored in the ca.crt key of the ConfigMap object.

    $ oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config
    Tip

    You can alternatively apply the following YAML to create the config map:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ca-config-map
      namespace: openshift-config
    data:
      ca.crt: |
        <CA_certificate_PEM>

7.2.5. Sample Keystone CR

The following custom resource (CR) shows the parameters and acceptable values for a Keystone identity provider.

Keystone CR

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: keystoneidp 1
    mappingMethod: claim 2
    type: Keystone
    keystone:
      domainName: default 3
      url: https://keystone.example.com:5000 4
      ca: 5
        name: ca-config-map
      tlsClientCert: 6
        name: client-cert-secret
      tlsClientKey: 7
        name: client-key-secret

1
This provider name is prefixed to provider user names to form an identity name.
2
Controls how mappings are established between this provider’s identities and User objects.
3
Keystone domain name. In Keystone, usernames are domain-specific. Only a single domain is supported.
4
The URL to use to connect to the Keystone server (required). This must use https.
5
Optional: Reference to an OpenShift Container Platform ConfigMap object containing the PEM-encoded certificate authority bundle to use in validating server certificates for the configured URL.
6
Optional: Reference to an OpenShift Container Platform Secret object containing the client certificate to present when making requests to the configured URL.
7
Reference to an OpenShift Container Platform Secret object containing the key for the client certificate. Required if tlsClientCert is specified.

Additional resources

7.2.6. Adding an identity provider to your cluster

After you install your cluster, add an identity provider to it so your users can authenticate.

Prerequisites

  • Create an OpenShift Container Platform cluster.
  • Create the custom resource (CR) for your identity providers.
  • You must be logged in as an administrator.

Procedure

  1. Apply the defined CR:

    $ oc apply -f </path/to/CR>
    Note

    If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply. In this case you can safely ignore this warning.

  2. Log in to the cluster as a user from your identity provider, entering the password when prompted.

    $ oc login -u <username>
  3. Confirm that the user logged in successfully, and display the user name.

    $ oc whoami

7.3. Configuring an LDAP identity provider

Configure the ldap identity provider to validate user names and passwords against an LDAPv3 server, using simple bind authentication.

7.3.1. About identity providers in OpenShift Container Platform

By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster.

Note

OpenShift Container Platform user names containing /, :, and % are not supported.

7.3.2. About LDAP authentication

During authentication, the LDAP directory is searched for an entry that matches the provided user name. If a single unique match is found, a simple bind is attempted using the distinguished name (DN) of the entry plus the provided password.

These are the steps taken:

  1. Generate a search filter by combining the attribute and filter in the configured url with the user-provided user name.
  2. Search the directory using the generated filter. If the search does not return exactly one entry, deny access.
  3. Attempt to bind to the LDAP server using the DN of the entry retrieved from the search, and the user-provided password.
  4. If the bind is unsuccessful, deny access.
  5. If the bind is successful, build an identity using the configured attributes as the identity, email address, display name, and preferred user name.

The configured url is an RFC 2255 URL, which specifies the LDAP host and search parameters to use. The syntax of the URL is:

ldap://host:port/basedn?attribute?scope?filter

For this URL:

ldap

For regular LDAP, use the string ldap. For secure LDAP (LDAPS), use ldaps instead.

host:port

The name and port of the LDAP server. Defaults to localhost:389 for ldap and localhost:636 for LDAPS.

basedn

The DN of the branch of the directory where all searches should start from. At the very least, this must be the top of your directory tree, but it could also specify a subtree in the directory.

attribute

The attribute to search for. Although RFC 2255 allows a comma-separated list of attributes, only the first attribute will be used, no matter how many are provided. If no attributes are provided, the default is to use uid. It is recommended to choose an attribute that will be unique across all entries in the subtree you will be using.

scope

The scope of the search. Can be either one or sub. If the scope is not provided, the default is to use a scope of sub.

filter

A valid LDAP search filter. If not provided, defaults to (objectClass=*).

When doing searches, the attribute, filter, and provided user name are combined to create a search filter that looks like:

(&(<filter>)(<attribute>=<username>))

For example, consider a URL of:

ldap://ldap.example.com/o=Acme?cn?sub?(enabled=true)

When a client attempts to connect using a user name of bob, the resulting search filter will be (&(enabled=true)(cn=bob)).

If the LDAP directory requires authentication to search, specify a bindDN and bindPassword to use to perform the entry search.
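
Before configuring the identity provider, you can verify that a user name resolves to exactly one entry by running the equivalent search manually. The following sketch uses the OpenLDAP client tools (ldapsearch is assumed to be installed), and the host, bind options, base DN, and filter are examples that you must adapt to your directory:

$ ldapsearch -x -H ldap://ldap.example.com -D "cn=searchuser,o=Acme" -W \
    -b "o=Acme" -s sub "(&(enabled=true)(cn=bob))" dn

If the search returns zero entries or more than one entry, access is denied for that user name.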

7.3.3. Creating the LDAP secret

To use the identity provider, you must define an OpenShift Container Platform Secret object that contains the bindPassword field.

Procedure

  • Create a Secret object that contains the bindPassword field:

    $ oc create secret generic ldap-secret --from-literal=bindPassword=<secret> -n openshift-config 1
    1
    The secret key containing the bindPassword for the --from-literal argument must be called bindPassword.
    Tip

    You can alternatively apply the following YAML to create the secret:

    apiVersion: v1
    kind: Secret
    metadata:
      name: ldap-secret
      namespace: openshift-config
    type: Opaque
    data:
      bindPassword: <base64_encoded_bind_password>
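
    The value of the bindPassword key must be base64-encoded. For example, you can produce it with the base64 utility (replace <secret> with the actual bind password):

    $ echo -n '<secret>' | base64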

7.3.4. Creating a config map

Identity providers use OpenShift Container Platform ConfigMap objects in the openshift-config namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider.

Procedure

  • Define an OpenShift Container Platform ConfigMap object containing the certificate authority by using the following command. The certificate authority must be stored in the ca.crt key of the ConfigMap object.

    $ oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config
    Tip

    You can alternatively apply the following YAML to create the config map:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ca-config-map
      namespace: openshift-config
    data:
      ca.crt: |
        <CA_certificate_PEM>
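
    If you are not sure whether the file at /path/to/ca is a PEM-encoded certificate, you can inspect it first, for example with the openssl client (assumed to be installed):

    $ openssl x509 -in /path/to/ca -noout -subject -enddate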

7.3.5. Sample LDAP CR

The following custom resource (CR) shows the parameters and acceptable values for an LDAP identity provider.

LDAP CR

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: ldapidp 1
    mappingMethod: claim 2
    type: LDAP
    ldap:
      attributes:
        id: 3
        - dn
        email: 4
        - mail
        name: 5
        - cn
        preferredUsername: 6
        - uid
      bindDN: "" 7
      bindPassword: 8
        name: ldap-secret
      ca: 9
        name: ca-config-map
      insecure: false 10
      url: "ldaps://ldaps.example.com/ou=users,dc=acme,dc=com?uid" 11

1
This provider name is prefixed to the returned user ID to form an identity name.
2
Controls how mappings are established between this provider’s identities and User objects.
3
List of attributes to use as the identity. First non-empty attribute is used. At least one attribute is required. If none of the listed attributes have a value, authentication fails. Defined attributes are retrieved as raw, allowing for binary values to be used.
4
List of attributes to use as the email address. First non-empty attribute is used.
5
List of attributes to use as the display name. First non-empty attribute is used.
6
List of attributes to use as the preferred user name when provisioning a user for this identity. First non-empty attribute is used.
7
Optional DN to use to bind during the search phase. Must be set if bindPassword is defined.
8
Optional reference to an OpenShift Container Platform Secret object containing the bind password. Must be set if bindDN is defined.
9
Optional: Reference to an OpenShift Container Platform ConfigMap object containing the PEM-encoded certificate authority bundle to use in validating server certificates for the configured URL. Only used when insecure is false.
10
When true, no TLS connection is made to the server. When false, ldaps:// URLs connect using TLS, and ldap:// URLs are upgraded to TLS. This must be set to false when ldaps:// URLs are in use, as these URLs always attempt to connect using TLS.
11
An RFC 2255 URL which specifies the LDAP host and search parameters to use.
Note

To restrict an LDAP integration to an explicit set of users, use the lookup mapping method. Before a login from LDAP is allowed, a cluster administrator must create an Identity object and a User object for each LDAP user.
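
For example, with the lookup mapping method and the provider name ldapidp from the sample CR, an administrator might pre-create the objects for one user as follows. This is a sketch: bob is a placeholder user name, and <identity_value> stands for the value of the first configured id attribute (the DN in the sample CR) for that user.

$ oc create user bob
$ oc create identity ldapidp:<identity_value>
$ oc create useridentitymapping ldapidp:<identity_value> bob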

7.3.6. Adding an identity provider to your cluster

After you install your cluster, add an identity provider to it so your users can authenticate.

Prerequisites

  • Create an OpenShift Container Platform cluster.
  • Create the custom resource (CR) for your identity providers.
  • You must be logged in as an administrator.

Procedure

  1. Apply the defined CR:

    $ oc apply -f </path/to/CR>
    Note

    If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply. In this case you can safely ignore this warning.

  2. Log in to the cluster as a user from your identity provider, entering the password when prompted.

    $ oc login -u <username>
  3. Confirm that the user logged in successfully, and display the user name.

    $ oc whoami

7.4. Configuring a basic authentication identity provider

Configure the basic-authentication identity provider for users to log in to OpenShift Container Platform with credentials validated against a remote identity provider. Basic authentication is a generic back-end integration mechanism.

7.4.1. About identity providers in OpenShift Container Platform

By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster.

Note

OpenShift Container Platform user names containing /, :, and % are not supported.

7.4.2. About basic authentication

Basic authentication is a generic back-end integration mechanism that allows users to log in to OpenShift Container Platform with credentials validated against a remote identity provider.

Because basic authentication is generic, you can use this identity provider for advanced authentication configurations.

Important

Basic authentication must use an HTTPS connection to the remote server to prevent potential snooping of the user ID and password and man-in-the-middle attacks.

With basic authentication configured, users send their user name and password to OpenShift Container Platform, which then validates those credentials against a remote server by making a server-to-server request, passing the credentials as a basic authentication header. This requires users to send their credentials to OpenShift Container Platform during login.

Note

This only works for user name/password login mechanisms, and OpenShift Container Platform must be able to make network requests to the remote authentication server.

User names and passwords are validated against a remote URL that is protected by basic authentication and returns JSON.

A 401 response indicates failed authentication.

A non-200 status, or the presence of a non-empty "error" key, indicates an error:

{"error":"Error message"}

A 200 status with a sub (subject) key indicates success:

{"sub":"userid"} 1
1
The subject must be unique to the authenticated user and must not change.

A successful response can optionally provide additional data, such as:

  • A display name using the name key. For example:

    {"sub":"userid", "name": "User Name", ...}
  • An email address using the email key. For example:

    {"sub":"userid", "email":"user@example.com", ...}
  • A preferred user name using the preferred_username key. This is useful when the unique, unchangeable subject is a database key or UID, and a more human-readable name exists. This is used as a hint when provisioning the OpenShift Container Platform user for the authenticated identity. For example:

    {"sub":"014fbff9a07c", "preferred_username":"bob", ...}

7.4.3. Creating the secret

Identity providers use OpenShift Container Platform Secret objects in the openshift-config namespace to contain the client secret, client certificates, and keys.

Procedure

  • Create a Secret object that contains the key and certificate by using the following command:

    $ oc create secret tls <secret_name> --key=key.pem --cert=cert.pem -n openshift-config
    Tip

    You can alternatively apply the following YAML to create the secret:

    apiVersion: v1
    kind: Secret
    metadata:
      name: <secret_name>
      namespace: openshift-config
    type: kubernetes.io/tls
    data:
      tls.crt: <base64_encoded_cert>
      tls.key: <base64_encoded_key>

7.4.4. Creating a config map

Identity providers use OpenShift Container Platform ConfigMap objects in the openshift-config namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider.

Procedure

  • Define an OpenShift Container Platform ConfigMap object containing the certificate authority by using the following command. The certificate authority must be stored in the ca.crt key of the ConfigMap object.

    $ oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config
    Tip

    You can alternatively apply the following YAML to create the config map:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ca-config-map
      namespace: openshift-config
    data:
      ca.crt: |
        <CA_certificate_PEM>

7.4.5. Sample basic authentication CR

The following custom resource (CR) shows the parameters and acceptable values for a basic authentication identity provider.

Basic authentication CR

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: basicidp 1
    mappingMethod: claim 2
    type: BasicAuth
    basicAuth:
      url: https://www.example.com/remote-idp 3
      ca: 4
        name: ca-config-map
      tlsClientCert: 5
        name: client-cert-secret
      tlsClientKey: 6
        name: client-key-secret

1
This provider name is prefixed to the returned user ID to form an identity name.
2
Controls how mappings are established between this provider’s identities and User objects.
3
URL accepting credentials in Basic authentication headers.
4
Optional: Reference to an OpenShift Container Platform ConfigMap object containing the PEM-encoded certificate authority bundle to use in validating server certificates for the configured URL.
5
Optional: Reference to an OpenShift Container Platform Secret object containing the client certificate to present when making requests to the configured URL.
6
Reference to an OpenShift Container Platform Secret object containing the key for the client certificate. Required if tlsClientCert is specified.

7.4.6. Adding an identity provider to your cluster

After you install your cluster, add an identity provider to it so your users can authenticate.

Prerequisites

  • Create an OpenShift Container Platform cluster.
  • Create the custom resource (CR) for your identity providers.
  • You must be logged in as an administrator.

Procedure

  1. Apply the defined CR:

    $ oc apply -f </path/to/CR>
    Note

    If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply. In this case you can safely ignore this warning.

  2. Log in to the cluster as a user from your identity provider, entering the password when prompted.

    $ oc login -u <username>
  3. Confirm that the user logged in successfully, and display the user name.

    $ oc whoami

7.4.7. Example Apache HTTPD configuration for basic identity providers

The basic identity provider (IDP) configuration in OpenShift Container Platform 4 requires that the IDP server respond with JSON for successes and failures. You can use CGI scripting in Apache HTTPD to accomplish this. This section provides examples.

Example /etc/httpd/conf.d/login.conf

<VirtualHost *:443>
  # CGI Scripts in here
  DocumentRoot /var/www/cgi-bin

  # SSL Directives
  SSLEngine on
  SSLCipherSuite PROFILE=SYSTEM
  SSLProxyCipherSuite PROFILE=SYSTEM
  SSLCertificateFile /etc/pki/tls/certs/localhost.crt
  SSLCertificateKeyFile /etc/pki/tls/private/localhost.key

  # Configure HTTPD to execute scripts
  ScriptAlias /basic /var/www/cgi-bin

  # Handles a failed login attempt
  ErrorDocument 401 /basic/fail.cgi

  # Handles authentication
  <Location /basic/login.cgi>
    AuthType Basic
    AuthName "Please Log In"
    AuthBasicProvider file
    AuthUserFile /etc/httpd/conf/passwords
    Require valid-user
  </Location>
</VirtualHost>

Example /var/www/cgi-bin/login.cgi

#!/bin/bash
echo "Content-Type: application/json"
echo ""
echo '{"sub":"userid", "name":"'$REMOTE_USER'"}'
exit 0

Example /var/www/cgi-bin/fail.cgi

#!/bin/bash
echo "Content-Type: application/json"
echo ""
echo '{"error": "Login failure"}'
exit 0

7.4.7.1. File requirements

These are the requirements for the files you create on an Apache HTTPD web server:

  • login.cgi and fail.cgi must be executable (chmod +x).
  • If SELinux is enabled, login.cgi and fail.cgi must have the proper SELinux context: run restorecon -RFv /var/www/cgi-bin, or confirm that the context is httpd_sys_script_exec_t by using ls -laZ (see the example commands after this list).
  • login.cgi is only executed if your user successfully logs in per Require and Auth directives.
  • fail.cgi is executed if the user fails to log in, resulting in an HTTP 401 response.
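
For example, the following commands make the scripts executable, restore the SELinux context, and verify it:

# chmod +x /var/www/cgi-bin/login.cgi /var/www/cgi-bin/fail.cgi
# restorecon -RFv /var/www/cgi-bin
# ls -laZ /var/www/cgi-bin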

7.4.8. Basic authentication troubleshooting

The most common issue relates to network connectivity to the backend server. For simple debugging, run curl commands on the master. To test for a successful login, replace the <user> and <password> in the following example command with valid credentials. To test an invalid login, replace them with false credentials.

$ curl --cacert /path/to/ca.crt --cert /path/to/client.crt --key /path/to/client.key -u <user>:<password> -v https://www.example.com/remote-idp

Successful responses

A 200 status with a sub (subject) key indicates success:

{"sub":"userid"}

The subject must be unique to the authenticated user and must not change.

A successful response can optionally provide additional data, such as:

  • A display name using the name key:

    {"sub":"userid", "name": "User Name", ...}
  • An email address using the email key:

    {"sub":"userid", "email":"user@example.com", ...}
  • A preferred user name using the preferred_username key:

    {"sub":"014fbff9a07c", "preferred_username":"bob", ...}

    The preferred_username key is useful when the unique, unchangeable subject is a database key or UID, and a more human-readable name exists. This is used as a hint when provisioning the OpenShift Container Platform user for the authenticated identity.

Failed responses

  • A 401 response indicates failed authentication.
  • A non-200 status or the presence of a non-empty "error" key indicates an error: {"error":"Error message"}
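
To inspect the returned JSON more easily, you can pipe the output through a formatter. This is only a convenience and assumes that python3 is available on the machine where you run curl:

$ curl -s --cacert /path/to/ca.crt --cert /path/to/client.crt --key /path/to/client.key \
    -u <user>:<password> https://www.example.com/remote-idp | python3 -m json.tool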

7.5. Configuring a request header identity provider

Configure the request-header identity provider to identify users from request header values, such as X-Remote-User. It is typically used in combination with an authenticating proxy, which sets the request header value.

7.5.1. About identity providers in OpenShift Container Platform

By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster.

Note

OpenShift Container Platform user names containing /, :, and % are not supported.

7.5.2. About request header authentication

A request header identity provider identifies users from request header values, such as X-Remote-User. It is typically used in combination with an authenticating proxy, which sets the request header value. The request header identity provider cannot be combined with other identity providers that use direct password logins, such as htpasswd, Keystone, LDAP or basic authentication.

Note

You can also use the request header identity provider for advanced configurations such as the community-supported SAML authentication. Note that this solution is not supported by Red Hat.

For users to authenticate using this identity provider, they must access https://<namespace_route>/oauth/authorize (and subpaths) via an authenticating proxy. To accomplish this, configure the OAuth server to redirect unauthenticated requests for OAuth tokens to the proxy endpoint that proxies to https://<namespace_route>/oauth/authorize.

To redirect unauthenticated requests from clients expecting browser-based login flows:

  • Set the provider.loginURL parameter to the authenticating proxy URL that will authenticate interactive clients and then proxy the request to https://<namespace_route>/oauth/authorize.

To redirect unauthenticated requests from clients expecting WWW-Authenticate challenges:

  • Set the provider.challengeURL parameter to the authenticating proxy URL that will authenticate clients expecting WWW-Authenticate challenges and then proxy the request to https://<namespace_route>/oauth/authorize.

The provider.challengeURL and provider.loginURL parameters can include the following tokens in the query portion of the URL:

  • ${url} is replaced with the current URL, escaped to be safe in a query parameter.

    For example: https://www.example.com/sso-login?then=${url}

  • ${query} is replaced with the current query string, unescaped.

    For example: https://www.example.com/auth-proxy/oauth/authorize?${query}

Important

As of OpenShift Container Platform 4.1, your proxy must support mutual TLS.

7.5.2.1. SSPI connection support on Microsoft Windows
Important

Using SSPI connection support on Microsoft Windows is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

The OpenShift CLI (oc) supports the Security Support Provider Interface (SSPI) to allow for SSO flows on Microsoft Windows. If you use the request header identity provider with a GSSAPI-enabled proxy to connect an Active Directory server to OpenShift Container Platform, users can automatically authenticate to OpenShift Container Platform by using the oc command line interface from a domain-joined Microsoft Windows computer.

7.5.3. Creating a config map

Identity providers use OpenShift Container Platform ConfigMap objects in the openshift-config namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider.

Procedure

  • Define an OpenShift Container Platform ConfigMap object containing the certificate authority by using the following command. The certificate authority must be stored in the ca.crt key of the ConfigMap object.

    $ oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config
    Tip

    You can alternatively apply the following YAML to create the config map:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ca-config-map
      namespace: openshift-config
    data:
      ca.crt: |
        <CA_certificate_PEM>

7.5.4. Sample request header CR

The following custom resource (CR) shows the parameters and acceptable values for a request header identity provider.

Request header CR

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: requestheaderidp 1
    mappingMethod: claim 2
    type: RequestHeader
    requestHeader:
      challengeURL: "https://www.example.com/challenging-proxy/oauth/authorize?${query}" 3
      loginURL: "https://www.example.com/login-proxy/oauth/authorize?${query}" 4
      ca: 5
        name: ca-config-map
      clientCommonNames: 6
      - my-auth-proxy
      headers: 7
      - X-Remote-User
      - SSO-User
      emailHeaders: 8
      - X-Remote-User-Email
      nameHeaders: 9
      - X-Remote-User-Display-Name
      preferredUsernameHeaders: 10
      - X-Remote-User-Login

1
This provider name is prefixed to the user name in the request header to form an identity name.
2
Controls how mappings are established between this provider’s identities and User objects.
3
Optional: URL to redirect unauthenticated /oauth/authorize requests to, that will authenticate clients which expect WWW-Authenticate challenges, and then proxy them to https://<namespace_route>/oauth/authorize. ${url} is replaced with the current URL, escaped to be safe in a query parameter. ${query} is replaced with the current query string. If this attribute is not defined, then loginURL must be used.
4
Optional: URL to redirect unauthenticated /oauth/authorize requests to, that will authenticate browser-based clients and then proxy their request to https://<namespace_route>/oauth/authorize. The URL that proxies to https://<namespace_route>/oauth/authorize must end with /authorize (with no trailing slash), and also proxy subpaths, in order for OAuth approval flows to work properly. ${url} is replaced with the current URL, escaped to be safe in a query parameter. ${query} is replaced with the current query string. If this attribute is not defined, then challengeURL must be used.
5
Reference to an OpenShift Container Platform ConfigMap object containing a PEM-encoded certificate bundle. Used as a trust anchor to validate the TLS certificates presented by the remote server.
Important

As of OpenShift Container Platform 4.1, the ca field is required for this identity provider. This means that your proxy must support mutual TLS.

6
Optional: list of common names (cn). If set, a valid client certificate with a Common Name (cn) in the specified list must be presented before the request headers are checked for user names. If empty, any Common Name is allowed. Can only be used in combination with ca.
7
Header names to check, in order, for the user identity. The first header containing a value is used as the identity. Required, case-insensitive.
8
Header names to check, in order, for an email address. The first header containing a value is used as the email address. Optional, case-insensitive.
9
Header names to check, in order, for a display name. The first header containing a value is used as the display name. Optional, case-insensitive.
10
Header names to check, in order, for a preferred user name, if different than the immutable identity determined from the headers specified in headers. The first header containing a value is used as the preferred user name when provisioning. Optional, case-insensitive.

7.5.5. Adding an identity provider to your cluster

After you install your cluster, add an identity provider to it so your users can authenticate.

Prerequisites

  • Create an OpenShift Container Platform cluster.
  • Create the custom resource (CR) for your identity providers.
  • You must be logged in as an administrator.

Procedure

  1. Apply the defined CR:

    $ oc apply -f </path/to/CR>
    Note

    If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply. In this case you can safely ignore this warning.

  2. Log in to the cluster as a user from your identity provider, entering the password when prompted.

    $ oc login -u <username>
  3. Confirm that the user logged in successfully, and display the user name.

    $ oc whoami

7.5.6. Example Apache authentication configuration using request header

This example configures an Apache authentication proxy for the OpenShift Container Platform using the request header identity provider.

Custom proxy configuration

Using the mod_auth_gssapi module is a popular way to configure the Apache authentication proxy using the request header identity provider; however, it is not required. Other proxies can easily be used if the following requirements are met:

  • Block the X-Remote-User header from client requests to prevent spoofing.
  • Enforce client certificate authentication in the RequestHeaderIdentityProvider configuration.
  • Require the X-Csrf-Token header be set for all authentication requests using the challenge flow.
  • Make sure only the /oauth/authorize endpoint and its subpaths are proxied; redirects must be rewritten to allow the backend server to send the client to the correct location.
  • The URL that proxies to https://<namespace_route>/oauth/authorize must end with /authorize with no trailing slash. For example, https://proxy.example.com/login-proxy/authorize?… must proxy to https://<namespace_route>/oauth/authorize?….
  • Subpaths of the URL that proxies to https://<namespace_route>/oauth/authorize must proxy to subpaths of https://<namespace_route>/oauth/authorize. For example, https://proxy.example.com/login-proxy/authorize/approve?… must proxy to https://<namespace_route>/oauth/authorize/approve?….
Note

The https://<namespace_route> address is the route to the OAuth server and can be obtained by running oc get route -n openshift-authentication.
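
For example, the following command prints the host name to substitute for <namespace_route>, assuming the default route name oauth-openshift:

$ oc get route oauth-openshift -n openshift-authentication -o jsonpath='{.spec.host}'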

Configuring Apache authentication using request header

This example uses the mod_auth_gssapi module to configure an Apache authentication proxy using the request header identity provider.

Prerequisites

  • Obtain the mod_auth_gssapi module from the Optional channel. You must have the following packages installed on your local machine:

    • httpd
    • mod_ssl
    • mod_session
    • apr-util-openssl
    • mod_auth_gssapi
  • Generate a CA for validating requests that submit the trusted header. Define an OpenShift Container Platform ConfigMap object containing the CA. This is done by running:

    $ oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config 1
    1
    The CA must be stored in the ca.crt key of the ConfigMap object.
    Tip

    You can alternatively apply the following YAML to create the config map:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ca-config-map
      namespace: openshift-config
    data:
      ca.crt: |
        <CA_certificate_PEM>
  • Generate a client certificate for the proxy. You can generate this certificate by using any x509 certificate tooling. The client certificate must be signed by the CA you generated for validating requests that submit the trusted header.
  • Create the custom resource (CR) for your identity providers.

Procedure

This proxy uses a client certificate to connect to the OAuth server, which is configured to trust the X-Remote-User header.

  1. Create the certificate for the Apache configuration. The certificate that you specify as the SSLProxyMachineCertificateFile parameter value is the proxy’s client certificate that is used to authenticate the proxy to the server. It must use TLS Web Client Authentication as the extended key type.
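
    A minimal sketch of one way to create this certificate by using openssl is shown below. The file names and the subject are placeholders, and the signing CA (ca.crt and ca.key) is assumed to be the CA that you generated for validating requests that submit the trusted header; adapt the commands to your own certificate tooling:

    $ printf 'extendedKeyUsage = clientAuth\n' > client_ext.cnf
    $ openssl req -new -newkey rsa:4096 -nodes -subj "/CN=my-auth-proxy" \
        -keyout authproxy.key -out authproxy.csr
    $ openssl x509 -req -in authproxy.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
        -days 365 -extfile client_ext.cnf -out authproxy.crt
    $ cat authproxy.crt authproxy.key > /etc/pki/tls/certs/authproxy.pem
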
  2. Create the Apache configuration. Use the following template to provide your required settings and values:

    Important

    Carefully review the template and customize its contents to fit your environment.

    LoadModule request_module modules/mod_request.so
    LoadModule auth_gssapi_module modules/mod_auth_gssapi.so
    # Some Apache configurations might require these modules.
    # LoadModule auth_form_module modules/mod_auth_form.so
    # LoadModule session_module modules/mod_session.so
    
    # Nothing needs to be served over HTTP.  This virtual host simply redirects to
    # HTTPS.
    <VirtualHost *:80>
      DocumentRoot /var/www/html
      RewriteEngine              On
      RewriteRule     ^(.*)$     https://%{HTTP_HOST}$1 [R,L]
    </VirtualHost>
    
    <VirtualHost *:443>
      # This needs to match the certificates you generated.  See the CN and X509v3
      # Subject Alternative Name in the output of:
      # openssl x509 -text -in /etc/pki/tls/certs/localhost.crt
      ServerName www.example.com
    
      DocumentRoot /var/www/html
      SSLEngine on
      SSLCertificateFile /etc/pki/tls/certs/localhost.crt
      SSLCertificateKeyFile /etc/pki/tls/private/localhost.key
      SSLCACertificateFile /etc/pki/CA/certs/ca.crt
    
      SSLProxyEngine on
      SSLProxyCACertificateFile /etc/pki/CA/certs/ca.crt
      # It is critical to enforce client certificates. Otherwise, requests can
      # spoof the X-Remote-User header by accessing the /oauth/authorize endpoint
      # directly.
      SSLProxyMachineCertificateFile /etc/pki/tls/certs/authproxy.pem
    
      # To use the challenging-proxy, an X-Csrf-Token must be present.
      RewriteCond %{REQUEST_URI} ^/challenging-proxy
      RewriteCond %{HTTP:X-Csrf-Token} ^$ [NC]
      RewriteRule ^.* - [F,L]
    
      <Location /challenging-proxy/oauth/authorize>
          # Insert your backend server name/ip here.
          ProxyPass https://<namespace_route>/oauth/authorize
          AuthName "SSO Login"
          # For Kerberos
          AuthType GSSAPI
          Require valid-user
          RequestHeader set X-Remote-User %{REMOTE_USER}s
    
          GssapiCredStore keytab:/etc/httpd/protected/auth-proxy.keytab
          # Enable the following if you want to allow users to fallback
          # to password based authentication when they do not have a client
          # configured to perform kerberos authentication.
          GssapiBasicAuth On
    
          # For ldap:
          # AuthBasicProvider ldap
          # AuthLDAPURL "ldap://ldap.example.com:389/ou=People,dc=my-domain,dc=com?uid?sub?(objectClass=*)"
      </Location>

      <Location /login-proxy/oauth/authorize>
          # Insert your backend server name/ip here.
          ProxyPass https://<namespace_route>/oauth/authorize

          AuthName "SSO Login"
          AuthType GSSAPI
          Require valid-user
          RequestHeader set X-Remote-User %{REMOTE_USER}s env=REMOTE_USER

          GssapiCredStore keytab:/etc/httpd/protected/auth-proxy.keytab
          # Enable the following if you want to allow users to fallback
          # to password based authentication when they do not have a client
          # configured to perform kerberos authentication.
          GssapiBasicAuth On

          ErrorDocument 401 /login.html
      </Location>
    
    </VirtualHost>
    
    RequestHeader unset X-Remote-User
    Note

    The https://<namespace_route> address is the route to the OAuth server and can be obtained by running oc get route -n openshift-authentication.

  3. Update the identityProviders stanza in the custom resource (CR):

    identityProviders:
      - name: requestheaderidp
        type: RequestHeader
        requestHeader:
          challengeURL: "https://<namespace_route>/challenging-proxy/oauth/authorize?${query}"
          loginURL: "https://<namespace_route>/login-proxy/oauth/authorize?${query}"
          ca:
            name: ca-config-map
          clientCommonNames:
          - my-auth-proxy
          headers:
          - X-Remote-User
  4. Verify the configuration.

    1. Confirm that you can bypass the proxy by requesting a token by supplying the correct client certificate and header:

      # curl -L -k -H "X-Remote-User: joe" \
         --cert /etc/pki/tls/certs/authproxy.pem \
         https://<namespace_route>/oauth/token/request
    2. Confirm that requests that do not supply the client certificate fail by requesting a token without the certificate:

      # curl -L -k -H "X-Remote-User: joe" \
         https://<namespace_route>/oauth/token/request
    3. Confirm that the challengeURL redirect is active:

      # curl -k -v -H 'X-Csrf-Token: 1' \
         'https://<namespace_route>/oauth/authorize?client_id=openshift-challenging-client&response_type=token'

      Copy the challengeURL redirect to use in the next step.

    4. Run this command to show a 401 response with a WWW-Authenticate basic challenge, a negotiate challenge, or both challenges:

      # curl -k -v -H 'X-Csrf-Token: 1' \
         <challengeURL_redirect + query>
    5. Test logging in to the OpenShift CLI (oc) with and without using a Kerberos ticket:

      1. If you generated a Kerberos ticket by using kinit, destroy it:

        # kdestroy -c cache_name 1
        1
        Make sure to provide the name of your Kerberos cache.
      2. Log in to the oc tool by using your Kerberos credentials:

        # oc login -u <username>

        Enter your Kerberos password at the prompt.

      3. Log out of the oc tool:

        # oc logout
      4. Use your Kerberos credentials to get a ticket:

        # kinit

        Enter your Kerberos user name and password at the prompt.

      5. Confirm that you can log in to the oc tool:

        # oc login

        If your configuration is correct, you are logged in without entering separate credentials.

7.6. Configuring a GitHub or GitHub Enterprise identity provider

Configure the github identity provider to validate user names and passwords against GitHub or GitHub Enterprise’s OAuth authentication server. OAuth facilitates a token exchange flow between OpenShift Container Platform and GitHub or GitHub Enterprise.

You can use the GitHub integration to connect to either GitHub or GitHub Enterprise. For GitHub Enterprise integrations, you must provide the hostname of your instance and can optionally provide a CA certificate bundle to use in requests to the server.

Note

The following steps apply to both GitHub and GitHub Enterprise unless noted.

7.6.1. About identity providers in OpenShift Container Platform

By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster.

Note

OpenShift Container Platform user names containing /, :, and % are not supported.

7.6.2. About GitHub authentication

Configuring GitHub authentication allows users to log in to OpenShift Container Platform with their GitHub credentials. To prevent anyone with any GitHub user ID from logging in to your OpenShift Container Platform cluster, you can restrict access to only those in specific GitHub organizations.

7.6.3. Registering a GitHub application

To use GitHub or GitHub Enterprise as an identity provider, you must register an application to use.

Procedure

  1. Register a new OAuth application on GitHub or GitHub Enterprise.

  2. Enter an application name, for example My OpenShift Install.
  3. Enter a homepage URL, such as https://oauth-openshift.apps.<cluster-name>.<cluster-domain>.
  4. Optional: Enter an application description.
  5. Enter the authorization callback URL, where the end of the URL contains the identity provider name:

    https://oauth-openshift.apps.<cluster-name>.<cluster-domain>/oauth2callback/<idp-provider-name>

    For example:

    https://oauth-openshift.apps.openshift-cluster.example.com/oauth2callback/github
  6. Click Register application. GitHub provides a client ID and a client secret. You need these values to complete the identity provider configuration.

7.6.4. Creating the secret

Identity providers use OpenShift Container Platform Secret objects in the openshift-config namespace to contain the client secret, client certificates, and keys.

Procedure

  • Create a Secret object containing a string by using the following command:

    $ oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config
    Tip

    You can alternatively apply the following YAML to create the secret:

    apiVersion: v1
    kind: Secret
    metadata:
      name: <secret_name>
      namespace: openshift-config
    type: Opaque
    data:
      clientSecret: <base64_encoded_client_secret>
  • You can define a Secret object containing the contents of a file, such as a certificate file, by using the following command:

    $ oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config

7.6.5. Creating a config map

Identity providers use OpenShift Container Platform ConfigMap objects in the openshift-config namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider.

Note

This procedure is only required for GitHub Enterprise.

Procedure

  • Define an OpenShift Container Platform ConfigMap object containing the certificate authority by using the following command. The certificate authority must be stored in the ca.crt key of the ConfigMap object.

    $ oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config
    Tip

    You can alternatively apply the following YAML to create the config map:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ca-config-map
      namespace: openshift-config
    data:
      ca.crt: |
        <CA_certificate_PEM>

7.6.6. Sample GitHub CR

The following custom resource (CR) shows the parameters and acceptable values for a GitHub identity provider.

GitHub CR

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: githubidp 1
    mappingMethod: claim 2
    type: GitHub
    github:
      ca: 3
        name: ca-config-map
      clientID: {...} 4
      clientSecret: 5
        name: github-secret
      hostname: ... 6
      organizations: 7
      - myorganization1
      - myorganization2
      teams: 8
      - myorganization1/team-a
      - myorganization2/team-b

1
This provider name is prefixed to the GitHub numeric user ID to form an identity name. It is also used to build the callback URL.
2
Controls how mappings are established between this provider’s identities and User objects.
3
Optional: Reference to an OpenShift Container Platform ConfigMap object containing the PEM-encoded certificate authority bundle to use in validating server certificates for the configured URL. Only for use in GitHub Enterprise with a non-publicly trusted root certificate.
4
The client ID of a registered GitHub OAuth application. The application must be configured with a callback URL of https://oauth-openshift.apps.<cluster-name>.<cluster-domain>/oauth2callback/<idp-provider-name>.
5
Reference to an OpenShift Container Platform Secret object containing the client secret issued by GitHub.
6
For GitHub Enterprise, you must provide the hostname of your instance, such as example.com. This value must match the GitHub Enterprise hostname value in the /setup/settings file and cannot include a port number. If this value is not set, then either teams or organizations must be defined. For GitHub, omit this parameter.
7
The list of organizations. Either the organizations or teams field must be set unless the hostname field is set, or if mappingMethod is set to lookup. Cannot be used in combination with the teams field.
8
The list of teams. Either the teams or organizations field must be set unless the hostname field is set, or if mappingMethod is set to lookup. Cannot be used in combination with the organizations field.
Note

If organizations or teams is specified, only GitHub users that are members of at least one of the listed organizations will be allowed to log in. If the GitHub OAuth application configured in clientID is not owned by the organization, an organization owner must grant third-party access to use this option. This can be done during the first GitHub login by the organization’s administrator, or from the GitHub organization settings.

7.6.7. Adding an identity provider to your cluster

After you install your cluster, add an identity provider to it so your users can authenticate.

Prerequisites

  • Create an OpenShift Container Platform cluster.
  • Create the custom resource (CR) for your identity providers.
  • You must be logged in as an administrator.

Procedure

  1. Apply the defined CR:

    $ oc apply -f </path/to/CR>
    Note

    If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply. In this case you can safely ignore this warning.

  2. Obtain a token from the OAuth server.

    As long as the kubeadmin user has been removed, the oc login command provides instructions on how to access a web page where you can retrieve the token.

    You can also access this page from the web console by navigating to (?) Help → Command Line Tools → Copy Login Command.

  3. Log in to the cluster, passing in the token to authenticate.

    $ oc login --token=<token>
    Note

    This identity provider does not support logging in with a user name and password.

  4. Confirm that the user logged in successfully, and display the user name.

    $ oc whoami

7.7. Configuring a GitLab identity provider

Configure the gitlab identity provider using GitLab.com or any other GitLab instance as an identity provider.

7.7.1. About identity providers in OpenShift Container Platform

By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster.

Note

OpenShift Container Platform user names containing /, :, and % are not supported.

7.7.2. About GitLab authentication

Configuring GitLab authentication allows users to log in to OpenShift Container Platform with their GitLab credentials.

If you use GitLab version 7.7.0 to 11.0, you connect using the OAuth integration. If you use GitLab version 11.1 or later, you can use OpenID Connect (OIDC) to connect instead of OAuth.

7.7.3. Creating the secret

Identity providers use OpenShift Container Platform Secret objects in the openshift-config namespace to contain the client secret, client certificates, and keys.

Procedure

  • Create a Secret object containing a string by using the following command:

    $ oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config
    Tip

    You can alternatively apply the following YAML to create the secret:

    apiVersion: v1
    kind: Secret
    metadata:
      name: <secret_name>
      namespace: openshift-config
    type: Opaque
    data:
      clientSecret: <base64_encoded_client_secret>
  • You can define a Secret object containing the contents of a file, such as a certificate file, by using the following command:

    $ oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config

7.7.4. Creating a config map

Identity providers use OpenShift Container Platform ConfigMap objects in the openshift-config namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider.

Note

This procedure is only required if your GitLab instance uses a TLS certificate that is not signed by a publicly trusted certificate authority.

Procedure

  • Define an OpenShift Container Platform ConfigMap object containing the certificate authority by using the following command. The certificate authority must be stored in the ca.crt key of the ConfigMap object.

    $ oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config
    Tip

    You can alternatively apply the following YAML to create the config map:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ca-config-map
      namespace: openshift-config
    data:
      ca.crt: |
        <CA_certificate_PEM>

7.7.5. Sample GitLab CR

The following custom resource (CR) shows the parameters and acceptable values for a GitLab identity provider.

GitLab CR

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: gitlabidp 1
    mappingMethod: claim 2
    type: GitLab
    gitlab:
      clientID: {...} 3
      clientSecret: 4
        name: gitlab-secret
      url: https://gitlab.com 5
      ca: 6
        name: ca-config-map

1
This provider name is prefixed to the GitLab numeric user ID to form an identity name. It is also used to build the callback URL.
2
Controls how mappings are established between this provider’s identities and User objects.
3
The client ID of a registered GitLab OAuth application. The application must be configured with a callback URL of https://oauth-openshift.apps.<cluster-name>.<cluster-domain>/oauth2callback/<idp-provider-name>.
4
Reference to an OpenShift Container Platform Secret object containing the client secret issued by GitLab.
5
The host URL of a GitLab provider. This could either be https://gitlab.com/ or any other self-hosted instance of GitLab.
6
Optional: Reference to an OpenShift Container Platform ConfigMap object containing the PEM-encoded certificate authority bundle to use in validating server certificates for the configured URL.

7.7.6. Adding an identity provider to your cluster

After you install your cluster, add an identity provider to it so your users can authenticate.

Prerequisites

  • Create an OpenShift Container Platform cluster.
  • Create the custom resource (CR) for your identity providers.
  • You must be logged in as an administrator.

Procedure

  1. Apply the defined CR:

    $ oc apply -f </path/to/CR>
    Note

    If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply. In this case you can safely ignore this warning.

  2. Log in to the cluster as a user from your identity provider, entering the password when prompted.

    $ oc login -u <username>
  3. Confirm that the user logged in successfully, and display the user name.

    $ oc whoami

7.8. Configuring a Google identity provider

Configure the google identity provider using the Google OpenID Connect integration.

7.8.1. About identity providers in OpenShift Container Platform

By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster.

Note

OpenShift Container Platform user names containing /, :, and % are not supported.

7.8.2. About Google authentication

Using Google as an identity provider allows any Google user to authenticate to your server. You can limit authentication to members of a specific hosted domain with the hostedDomain configuration attribute.

Note

Using Google as an identity provider requires users to get a token using <namespace_route>/oauth/token/request to use with command-line tools.

7.8.3. Creating the secret

Identity providers use OpenShift Container Platform Secret objects in the openshift-config namespace to contain the client secret, client certificates, and keys.

Procedure

  • Create a Secret object containing a string by using the following command:

    $ oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config
    Tip

    You can alternatively apply the following YAML to create the secret:

    apiVersion: v1
    kind: Secret
    metadata:
      name: <secret_name>
      namespace: openshift-config
    type: Opaque
    data:
      clientSecret: <base64_encoded_client_secret>
  • You can define a Secret object containing the contents of a file, such as a certificate file, by using the following command:

    $ oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config

7.8.4. Sample Google CR

The following custom resource (CR) shows the parameters and acceptable values for a Google identity provider.

Google CR

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: googleidp 1
    mappingMethod: claim 2
    type: Google
    google:
      clientID: {...} 3
      clientSecret: 4
        name: google-secret
      hostedDomain: "example.com" 5

1
This provider name is prefixed to the Google numeric user ID to form an identity name. It is also used to build the redirect URL.
2
Controls how mappings are established between this provider’s identities and User objects.
3
The client ID of a registered Google project. The project must be configured with a redirect URI of https://oauth-openshift.apps.<cluster-name>.<cluster-domain>/oauth2callback/<idp-provider-name>.
4
Reference to an OpenShift Container Platform Secret object containing the client secret issued by Google.
5
A hosted domain used to restrict sign-in accounts. Optional if the lookup mappingMethod is used. If empty, any Google account is allowed to authenticate.

7.8.5. Adding an identity provider to your cluster

After you install your cluster, add an identity provider to it so your users can authenticate.

Prerequisites

  • Create an OpenShift Container Platform cluster.
  • Create the custom resource (CR) for your identity providers.
  • You must be logged in as an administrator.

Procedure

  1. Apply the defined CR:

    $ oc apply -f </path/to/CR>
    Note

    If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply. In this case you can safely ignore this warning.

  2. Obtain a token from the OAuth server.

    As long as the kubeadmin user has been removed, the oc login command provides instructions on how to access a web page where you can retrieve the token.

    You can also access this page from the web console by navigating to (?) Help → Command Line Tools → Copy Login Command.

  3. Log in to the cluster, passing in the token to authenticate.

    $ oc login --token=<token>
    Note

    This identity provider does not support logging in with a user name and password.

  4. Confirm that the user logged in successfully, and display the user name.

    $ oc whoami

7.9. Configuring an OpenID Connect identity provider

Configure the oidc identity provider to integrate with an OpenID Connect identity provider using an Authorization Code Flow.

7.9.1. About identity providers in OpenShift Container Platform

By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster.

Note

OpenShift Container Platform user names containing /, :, and % are not supported.

7.9.2. About OpenID Connect authentication

The Authentication Operator in OpenShift Container Platform requires that the configured OpenID Connect identity provider implements the OpenID Connect Discovery specification.
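
You can check whether a provider publishes the required discovery document by fetching its well-known configuration endpoint. For example, for an issuer of https://www.idp-issuer.com (the placeholder used in the sample CRs below), and assuming curl and python3 are available:

$ curl -s https://www.idp-issuer.com/.well-known/openid-configuration | python3 -m json.tool

The response typically lists the authorization and token endpoints as well as the supported scopes and claims.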

Note

ID Token and UserInfo decryptions are not supported.

By default, the openid scope is requested. If required, extra scopes can be specified in the extraScopes field.

Claims are read from the JWT id_token returned from the OpenID identity provider and, if specified, from the JSON returned by the UserInfo URL.

At least one claim must be configured to use as the user’s identity. The standard identity claim is sub.

You can also indicate which claims to use as the user’s preferred user name, display name, and email address. If multiple claims are specified, the first one with a non-empty value is used. The following table lists the standard claims:

sub

Short for "subject identifier." The remote identity for the user at the issuer.

preferred_username

The preferred user name when provisioning a user. A shorthand name that the user wants to be referred to as, such as janedoe. Typically a value corresponding to the user’s login or username in the authentication system, such as username or email.

email

Email address.

name

Display name.

See the OpenID claims documentation for more information.

Note

Unless your OpenID Connect identity provider supports the resource owner password credentials (ROPC) grant flow, users must get a token from <namespace_route>/oauth/token/request to use with command-line tools.

7.9.3. Supported OIDC providers

The following OpenID Connect (OIDC) providers are tested and supported with OpenShift Container Platform. Using an OIDC provider that is not on this list might work with OpenShift Container Platform, but the provider was not tested by Red Hat and therefore is not supported by Red Hat.

  • GitLab
  • Google
  • Keycloak
  • Okta
  • Ping Identity
  • Red Hat Single Sign-On

7.9.4. Creating the secret

Identity providers use OpenShift Container Platform Secret objects in the openshift-config namespace to contain the client secret, client certificates, and keys.

Procedure

  • Create a Secret object containing a string by using the following command:

    $ oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config
    Tip

    You can alternatively apply the following YAML to create the secret:

    apiVersion: v1
    kind: Secret
    metadata:
      name: <secret_name>
      namespace: openshift-config
    type: Opaque
    data:
      clientSecret: <base64_encoded_client_secret>
  • You can define a Secret object containing the contents of a file, such as a certificate file, by using the following command:

    $ oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config

7.9.5. Creating a config map

Identity providers use OpenShift Container Platform ConfigMap objects in the openshift-config namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider.

Note

This procedure is only required if the OpenID Connect provider uses a TLS certificate that is not signed by a publicly trusted certificate authority.

Procedure

  • Define an OpenShift Container Platform ConfigMap object containing the certificate authority by using the following command. The certificate authority must be stored in the ca.crt key of the ConfigMap object.

    $ oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config
    Tip

    You can alternatively apply the following YAML to create the config map:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ca-config-map
      namespace: openshift-config
    data:
      ca.crt: |
        <CA_certificate_PEM>

7.9.6. Sample OpenID Connect CRs

The following custom resources (CRs) show the parameters and acceptable values for an OpenID Connect identity provider.

If you must specify a custom certificate bundle, extra scopes, extra authorization request parameters, or a userInfo URL, use the full OpenID Connect CR.

Standard OpenID Connect CR

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: oidcidp 1
    mappingMethod: claim 2
    type: OpenID
    openID:
      clientID: ... 3
      clientSecret: 4
        name: idp-secret
      claims: 5
        preferredUsername:
        - preferred_username
        name:
        - name
        email:
        - email
        groups:
        - groups
      issuer: https://www.idp-issuer.com 6

1
This provider name is prefixed to the value of the identity claim to form an identity name. It is also used to build the redirect URL.
2
Controls how mappings are established between this provider’s identities and User objects.
3
The client ID of a client registered with the OpenID provider. The client must be allowed to redirect to https://oauth-openshift.apps.<cluster_name>.<cluster_domain>/oauth2callback/<idp_provider_name>.
4
A reference to an OpenShift Container Platform Secret object containing the client secret.
5
The list of claims to use as the identity. The first non-empty claim is used.
6
The Issuer Identifier described in the OpenID spec. Must use https without query or fragment component.
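
Because the issuer must serve standard OpenID Connect metadata, you can sanity-check it before applying the CR by requesting its discovery document. The example below uses the placeholder issuer from the sample CR:

$ curl https://www.idp-issuer.com/.well-known/openid-configuration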

Full OpenID Connect CR

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: oidcidp
    mappingMethod: claim
    type: OpenID
    openID:
      clientID: ...
      clientSecret:
        name: idp-secret
      ca: 1
        name: ca-config-map
      extraScopes: 2
      - email
      - profile
      extraAuthorizeParameters: 3
        include_granted_scopes: "true"
      claims:
        preferredUsername: 4
        - preferred_username
        - email
        name: 5
        - nickname
        - given_name
        - name
        email: 6
        - custom_email_claim
        - email
        groups: 7
        - groups
      issuer: https://www.idp-issuer.com

1
Optional: Reference to an OpenShift Container Platform config map containing the PEM-encoded certificate authority bundle to use in validating server certificates for the configured URL.
2
Optional: The list of scopes to request, in addition to the openid scope, during the authorization token request.
3
Optional: A map of extra parameters to add to the authorization token request.
4
The list of claims to use as the preferred user name when provisioning a user for this identity. The first non-empty claim is used.
5
The list of claims to use as the display name. The first non-empty claim is used.
6
The list of claims to use as the email address. The first non-empty claim is used.
7
The list of claims to use to synchronize groups from the OpenID Connect provider to OpenShift Container Platform upon user login. The first non-empty claim is used.

7.9.7. Adding an identity provider to your cluster

After you install your cluster, add an identity provider to it so your users can authenticate.

Prerequisites

  • Create an OpenShift Container Platform cluster.
  • Create the custom resource (CR) for your identity providers.
  • You must be logged in as an administrator.

Procedure

  1. Apply the defined CR:

    $ oc apply -f </path/to/CR>
    Note

    If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply. In this case you can safely ignore this warning.

  2. Obtain a token from the OAuth server.

    As long as the kubeadmin user has been removed, the oc login command provides instructions on how to access a web page where you can retrieve the token.

    You can also access this page from the web console by navigating to (?) Help → Command Line Tools → Copy Login Command.

  3. Log in to the cluster, passing in the token to authenticate.

    $ oc login --token=<token>
    Note

    If your OpenID Connect identity provider supports the resource owner password credentials (ROPC) grant flow, you can log in with a user name and password. You might need to take steps to enable the ROPC grant flow for your identity provider.

    After the OIDC identity provider is configured in OpenShift Container Platform, you can log in by using the following command, which prompts for your user name and password:

    $ oc login -u <identity_provider_username> --server=<api_server_url_and_port>
  4. Confirm that the user logged in successfully, and display the user name.

    $ oc whoami

7.9.8. Configuring identity providers using the web console

Configure your identity provider (IDP) through the web console instead of the CLI.

Prerequisites

  • You must be logged in to the web console as a cluster administrator.

Procedure

  1. Navigate to Administration → Cluster Settings.
  2. Under the Configuration tab, click OAuth.
  3. Under the Identity Providers section, select your identity provider from the Add drop-down menu.
Note

You can specify multiple IDPs through the web console without overwriting existing IDPs.

Chapter 8. Using RBAC to define and apply permissions

8.1. RBAC overview

Role-based access control (RBAC) objects determine whether a user is allowed to perform a given action within a project.

Cluster administrators can use the cluster roles and bindings to control who has various access levels to the OpenShift Container Platform itself and to all projects.

Developers can use local roles and bindings to control who has access to their projects. Note that authorization is a separate step from authentication, which is more about determining the identity of who is taking the action.

Authorization is managed using:

Authorization object | Description

Rules

Sets of permitted verbs on a set of objects. For example, whether a user or service account can create pods.

Roles

Collections of rules. You can associate, or bind, users and groups to multiple roles.

Bindings

Associations between users and/or groups with a role.

There are two levels of RBAC roles and bindings that control authorization:

RBAC level | Description

Cluster RBAC

Roles and bindings that are applicable across all projects. Cluster roles exist cluster-wide, and cluster role bindings can reference only cluster roles.

Local RBAC

Roles and bindings that are scoped to a given project. While local roles exist only in a single project, local role bindings can reference both cluster and local roles.

A cluster role binding is a binding that exists at the cluster level. A role binding exists at the project level. The cluster role view must be bound to a user using a local role binding for that user to view the project. Create local roles only if a cluster role does not provide the set of permissions needed for a particular situation.
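
For example, a minimal sketch of granting a single user read access to one project is to bind the view cluster role with a local role binding. The user and project names are placeholders:

$ oc adm policy add-role-to-user view alice -n my-project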

This two-level hierarchy allows reuse across multiple projects through the cluster roles while allowing customization inside of individual projects through local roles.

During evaluation, both the cluster role bindings and the local role bindings are used. For example:

  1. Cluster-wide "allow" rules are checked.
  2. Locally-bound "allow" rules are checked.
  3. Deny by default.
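
You can check how these rules resolve for your own identity with the oc auth can-i command, which answers yes or no for a given verb and resource. The project name below is a placeholder:

$ oc auth can-i create pods -n my-project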

8.1.1. Default cluster roles

OpenShift Container Platform includes a set of default cluster roles that you can bind to users and groups cluster-wide or locally.

Important

It is not recommended to manually modify the default cluster roles. Modifications to these system roles can prevent a cluster from functioning properly.

Default cluster role | Description

admin

A project manager. If used in a local binding, an admin has rights to view any resource in the project and modify any resource in the project except for quota.

basic-user

A user that can get basic information about projects and users.

cluster-admin

A super-user that can perform any action in any project. When bound to a user with a local binding, they have full control over quota and every action on every resource in the project.

cluster-status

A user that can get basic cluster status information.

cluster-reader

A user that can get or view most of the objects but cannot modify them.

edit

A user that can modify most objects in a project but does not have the power to view or modify roles or bindings.

self-provisioner

A user that can create their own projects.

view

A user who cannot make any modifications, but can see most objects in a project. They cannot view or modify roles or bindings.

Be mindful of the difference between local and cluster bindings. For example, if you bind the cluster-admin role to a user by using a local role binding, it might appear that this user has the privileges of a cluster administrator. This is not the case. Binding cluster-admin to a user in a project grants super administrator privileges for only that project to the user. That user has the permissions of the cluster role admin, plus a few additional permissions like the ability to edit rate limits, for that project. This distinction can be confusing in the web console UI, which does not list cluster role bindings that are bound to true cluster administrators. However, it does list local role bindings that you can use to locally bind cluster-admin.
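
As a sketch of the local case described above, the following command binds the cluster-admin cluster role with a local role binding, so the user gets these elevated permissions only in the named project. The user and project names are placeholders:

$ oc adm policy add-role-to-user cluster-admin alice -n my-project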

The relationships between cluster roles, local roles, cluster role bindings, local role bindings, users, groups and service accounts are illustrated below.

Figure: OpenShift Container Platform RBAC
Warning

The get pods/exec, get pods/*, and get * rules grant execution privileges when they are applied to a role. Apply the principle of least privilege and assign only the minimal RBAC rights required for users and agents. For more information, see RBAC rules allow execution privileges.

8.1.2. Evaluating authorization

OpenShift Container Platform evaluates authorization by using:

Identity
The user name and list of groups that the user belongs to.
Action

The action you perform. In most cases, this consists of:

  • Project: The project you access. A project is a Kubernetes namespace with additional annotations that allows a community of users to organize and manage their content in isolation from other communities.
  • Verb: The action itself: get, list, create, update, delete, deletecollection, or watch.
  • Resource name: The API endpoint that you access.
Bindings
The full list of bindings, the associations between users or groups with a role.

OpenShift Container Platform evaluates authorization by using the following steps:

  1. The identity and the project-scoped action are used to find all bindings that apply to the user or their groups.
  2. Bindings are used to locate all the roles that apply.
  3. Roles are used to find all the rules that apply.
  4. The action is checked against each rule to find a match.
  5. If no matching rule is found, the action is then denied by default.
Tip

Remember that users and groups can be associated with, or bound to, multiple roles at the same time.

Project administrators can use the CLI to view local roles and bindings, including a matrix of the verbs and resources each is associated with.

Important

The cluster role bound to the project administrator is limited in a project through a local binding. It is not bound cluster-wide like the cluster roles granted to the cluster-admin or system:admin.

Cluster roles are roles defined at the cluster level but can be bound either at the cluster level or at the project level.

8.1.2.1. Cluster role aggregation

The default admin, edit, view, and cluster-reader cluster roles support cluster role aggregation, where the cluster rules for each role are dynamically updated as new rules are created. This feature is relevant only if you extend the Kubernetes API by creating custom resources.
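
For example, permissions for a custom resource can be folded into the default admin role by labeling a ClusterRole for aggregation. The API group and resource below are hypothetical:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: aggregate-widgets-to-admin
  labels:
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["example.com"]
  resources: ["widgets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]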

8.2. Projects and namespaces

A Kubernetes namespace provides a mechanism to scope resources in a cluster. The Kubernetes documentation has more information on namespaces.

Namespaces provide a unique scope for:

  • Named resources to avoid basic naming collisions.
  • Delegated management authority to trusted users.
  • The ability to limit community resource consumption.

Most objects in the system are scoped by namespace, but some are excepted and have no namespace, including nodes and users.

A project is a Kubernetes namespace with additional annotations and is the central vehicle by which access to resources for regular users is managed. A project allows a community of users to organize and manage their content in isolation from other communities. Users must be given access to projects by administrators, or if allowed to create projects, automatically have access to their own projects.

Projects can have a separate name, displayName, and description.

  • The mandatory name is a unique identifier for the project and is most visible when using the CLI tools or API. The maximum name length is 63 characters.
  • The optional displayName is how the project is displayed in the web console (defaults to name).
  • The optional description can be a more detailed description of the project and is also visible in the web console.
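
For example, the oc new-project command can set all three fields at creation time. The values shown are placeholders:

$ oc new-project demo --display-name="Demo project" --description="Scratch space for the demo team"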

Each project scopes its own set of:

Object | Description

Objects

Pods, services, replication controllers, etc.

Policies

Rules for which users can or cannot perform actions on objects.

Constraints

Quotas for each kind of object that can be limited.

Service accounts

Service accounts act automatically with designated access to objects in the project.

Cluster administrators can create projects and delegate administrative rights for the project to any member of the user community. Cluster administrators can also allow developers to create their own projects.

Developers and administrators can interact with projects by using the CLI or the web console.

8.3. Default projects

OpenShift Container Platform comes with a number of default projects, and projects starting with openshift- are the most essential to users. These projects host master components that run as pods and other infrastructure components. The pods created in these namespaces that have a critical pod annotation are considered critical, and they have guaranteed admission by the kubelet. Pods created for master components in these namespaces are already marked as critical.

Note

You cannot assign an SCC to pods created in one of the default namespaces: default, kube-system, kube-public, openshift-node, openshift-infra, and openshift. You cannot use these namespaces for running pods or services.

8.4. Viewing cluster roles and bindings

You can use the oc CLI to view cluster roles and bindings by using the oc describe command.

Prerequisites

  • Install the oc CLI.
  • Obtain permission to view the cluster roles and bindings.

Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any resource, including viewing cluster roles and bindings.

Procedure

  1. To view the cluster roles and their associated rule sets:

    $ oc describe clusterrole.rbac

    Example output

    Name:         admin
    Labels:       kubernetes.io/bootstrapping=rbac-defaults
    Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
    PolicyRule:
      Resources                                                  Non-Resource URLs  Resource Names  Verbs
      ---------                                                  -----------------  --------------  -----
      .packages.apps.redhat.com                                  []                 []              [* create update patch delete get list watch]
      imagestreams                                               []                 []              [create delete deletecollection get list patch update watch create get list watch]
      imagestreams.image.openshift.io                            []                 []              [create delete deletecollection get list patch update watch create get list watch]
      secrets                                                    []                 []              [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update]
      buildconfigs/webhooks                                      []                 []              [create delete deletecollection get list patch update watch get list watch]
      buildconfigs                                               []                 []              [create delete deletecollection get list patch update watch get list watch]
      buildlogs                                                  []                 []              [create delete deletecollection get list patch update watch get list watch]
      deploymentconfigs/scale                                    []                 []              [create delete deletecollection get list patch update watch get list watch]
      deploymentconfigs                                          []                 []              [create delete deletecollection get list patch update watch get list watch]
      imagestreamimages                                          []                 []              [create delete deletecollection get list patch update watch get list watch]
      imagestreammappings                                        []                 []              [create delete deletecollection get list patch update watch get list watch]
      imagestreamtags                                            []                 []              [create delete deletecollection get list patch update watch get list watch]
      processedtemplates                                         []                 []              [create delete deletecollection get list patch update watch get list watch]
      routes                                                     []                 []              [create delete deletecollection get list patch update watch get list watch]
      templateconfigs                                            []                 []              [create delete deletecollection get list patch update watch get list watch]
      templateinstances                                          []                 []              [create delete deletecollection get list patch update watch get list watch]
      templates                                                  []                 []              [create delete deletecollection get list patch update watch get list watch]
      deploymentconfigs.apps.openshift.io/scale                  []                 []              [create delete deletecollection get list patch update watch get list watch]
      deploymentconfigs.apps.openshift.io                        []                 []              [create delete deletecollection get list patch update watch get list watch]
      buildconfigs.build.openshift.io/webhooks                   []                 []              [create delete deletecollection get list patch update watch get list watch]
      buildconfigs.build.openshift.io                            []                 []              [create delete deletecollection get list patch update watch get list watch]
      buildlogs.build.openshift.io                               []                 []              [create delete deletecollection get list patch update watch get list watch]
      imagestreamimages.image.openshift.io                       []                 []              [create delete deletecollection get list patch update watch get list watch]
      imagestreammappings.image.openshift.io                     []                 []              [create delete deletecollection get list patch update watch get list watch]
      imagestreamtags.image.openshift.io                         []                 []              [create delete deletecollection get list patch update watch get list watch]
      routes.route.openshift.io                                  []                 []              [create delete deletecollection get list patch update watch get list watch]
      processedtemplates.template.openshift.io                   []                 []              [create delete deletecollection get list patch update watch get list watch]
      templateconfigs.template.openshift.io                      []                 []              [create delete deletecollection get list patch update watch get list watch]
      templateinstances.template.openshift.io                    []                 []              [create delete deletecollection get list patch update watch get list watch]
      templates.template.openshift.io                            []                 []              [create delete deletecollection get list patch update watch get list watch]
      serviceaccounts                                            []                 []              [create delete deletecollection get list patch update watch impersonate create delete deletecollection patch update get list watch]
      imagestreams/secrets                                       []                 []              [create delete deletecollection get list patch update watch]
      rolebindings                                               []                 []              [create delete deletecollection get list patch update watch]
      roles                                                      []                 []              [create delete deletecollection get list patch update watch]
      rolebindings.authorization.openshift.io                    []                 []              [create delete deletecollection get list patch update watch]
      roles.authorization.openshift.io                           []                 []              [create delete deletecollection get list patch update watch]
      imagestreams.image.openshift.io/secrets                    []                 []              [create delete deletecollection get list patch update watch]
      rolebindings.rbac.authorization.k8s.io                     []                 []              [create delete deletecollection get list patch update watch]
      roles.rbac.authorization.k8s.io                            []                 []              [create delete deletecollection get list patch update watch]
      networkpolicies.extensions                                 []                 []              [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch]
      networkpolicies.networking.k8s.io                          []                 []              [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch]
      configmaps                                                 []                 []              [create delete deletecollection patch update get list watch]
      endpoints                                                  []                 []              [create delete deletecollection patch update get list watch]
      persistentvolumeclaims                                     []                 []              [create delete deletecollection patch update get list watch]
      pods                                                       []                 []              [create delete deletecollection patch update get list watch]
      replicationcontrollers/scale                               []                 []              [create delete deletecollection patch update get list watch]
      replicationcontrollers                                     []                 []              [create delete deletecollection patch update get list watch]
      services                                                   []                 []              [create delete deletecollection patch update get list watch]
      daemonsets.apps                                            []                 []              [create delete deletecollection patch update get list watch]
      deployments.apps/scale                                     []                 []              [create delete deletecollection patch update get list watch]
      deployments.apps                                           []                 []              [create delete deletecollection patch update get list watch]
      replicasets.apps/scale                                     []                 []              [create delete deletecollection patch update get list watch]
      replicasets.apps                                           []                 []              [create delete deletecollection patch update get list watch]
      statefulsets.apps/scale                                    []                 []              [create delete deletecollection patch update get list watch]
      statefulsets.apps                                          []                 []              [create delete deletecollection patch update get list watch]
      horizontalpodautoscalers.autoscaling                       []                 []              [create delete deletecollection patch update get list watch]
      cronjobs.batch                                             []                 []              [create delete deletecollection patch update get list watch]
      jobs.batch                                                 []                 []              [create delete deletecollection patch update get list watch]
      daemonsets.extensions                                      []                 []              [create delete deletecollection patch update get list watch]
      deployments.extensions/scale                               []                 []              [create delete deletecollection patch update get list watch]
      deployments.extensions                                     []                 []              [create delete deletecollection patch update get list watch]
      ingresses.extensions                                       []                 []              [create delete deletecollection patch update get list watch]
      replicasets.extensions/scale                               []                 []              [create delete deletecollection patch update get list watch]
      replicasets.extensions                                     []                 []              [create delete deletecollection patch update get list watch]
      replicationcontrollers.extensions/scale                    []                 []              [create delete deletecollection patch update get list watch]
      poddisruptionbudgets.policy                                []                 []              [create delete deletecollection patch update get list watch]
      deployments.apps/rollback                                  []                 []              [create delete deletecollection patch update]
      deployments.extensions/rollback                            []                 []              [create delete deletecollection patch update]
      catalogsources.operators.coreos.com                        []                 []              [create update patch delete get list watch]
      clusterserviceversions.operators.coreos.com                []                 []              [create update patch delete get list watch]
      installplans.operators.coreos.com                          []                 []              [create update patch delete get list watch]
      packagemanifests.operators.coreos.com                      []                 []              [create update patch delete get list watch]
      subscriptions.operators.coreos.com                         []                 []              [create update patch delete get list watch]
      buildconfigs/instantiate                                   []                 []              [create]
      buildconfigs/instantiatebinary                             []                 []              [create]
      builds/clone                                               []                 []              [create]
      deploymentconfigrollbacks                                  []                 []              [create]
      deploymentconfigs/instantiate                              []                 []              [create]
      deploymentconfigs/rollback                                 []                 []              [create]
      imagestreamimports                                         []                 []              [create]
      localresourceaccessreviews                                 []                 []              [create]
      localsubjectaccessreviews                                  []                 []              [create]
      podsecuritypolicyreviews                                   []                 []              [create]
      podsecuritypolicyselfsubjectreviews                        []                 []              [create]
      podsecuritypolicysubjectreviews                            []                 []              [create]
      resourceaccessreviews                                      []                 []              [create]
      routes/custom-host                                         []                 []              [create]
      subjectaccessreviews                                       []                 []              [create]
      subjectrulesreviews                                        []                 []              [create]
      deploymentconfigrollbacks.apps.openshift.io                []                 []              [create]
      deploymentconfigs.apps.openshift.io/instantiate            []                 []              [create]
      deploymentconfigs.apps.openshift.io/rollback               []                 []              [create]
      localsubjectaccessreviews.authorization.k8s.io             []                 []              [create]
      localresourceaccessreviews.authorization.openshift.io      []                 []              [create]
      localsubjectaccessreviews.authorization.openshift.io       []                 []              [create]
      resourceaccessreviews.authorization.openshift.io           []                 []              [create]
      subjectaccessreviews.authorization.openshift.io            []                 []              [create]
      subjectrulesreviews.authorization.openshift.io             []                 []              [create]
      buildconfigs.build.openshift.io/instantiate                []                 []              [create]
      buildconfigs.build.openshift.io/instantiatebinary          []                 []              [create]
      builds.build.openshift.io/clone                            []                 []              [create]
      imagestreamimports.image.openshift.io                      []                 []              [create]
      routes.route.openshift.io/custom-host                      []                 []              [create]
      podsecuritypolicyreviews.security.openshift.io             []                 []              [create]
      podsecuritypolicyselfsubjectreviews.security.openshift.io  []                 []              [create]
      podsecuritypolicysubjectreviews.security.openshift.io      []                 []              [create]
      jenkins.build.openshift.io                                 []                 []              [edit view view admin edit view]
      builds                                                     []                 []              [get create delete deletecollection get list patch update watch get list watch]
      builds.build.openshift.io                                  []                 []              [get create delete deletecollection get list patch update watch get list watch]
      projects                                                   []                 []              [get delete get delete get patch update]
      projects.project.openshift.io                              []                 []              [get delete get delete get patch update]
      namespaces                                                 []                 []              [get get list watch]
      pods/attach                                                []                 []              [get list watch create delete deletecollection patch update]
      pods/exec                                                  []                 []              [get list watch create delete deletecollection patch update]
      pods/portforward                                           []                 []              [get list watch create delete deletecollection patch update]
      pods/proxy                                                 []                 []              [get list watch create delete deletecollection patch update]
      services/proxy                                             []                 []              [get list watch create delete deletecollection patch update]
      routes/status                                              []                 []              [get list watch update]
      routes.route.openshift.io/status                           []                 []              [get list watch update]
      appliedclusterresourcequotas                               []                 []              [get list watch]
      bindings                                                   []                 []              [get list watch]
      builds/log                                                 []                 []              [get list watch]
      deploymentconfigs/log                                      []                 []              [get list watch]
      deploymentconfigs/status                                   []                 []              [get list watch]
      events                                                     []                 []              [get list watch]
      imagestreams/status                                        []                 []              [get list watch]
      limitranges                                                []                 []              [get list watch]
      namespaces/status                                          []                 []              [get list watch]
      pods/log                                                   []                 []              [get list watch]
      pods/status                                                []                 []              [get list watch]
      replicationcontrollers/status                              []                 []              [get list watch]
      resourcequotas/status                                      []                 []              [get list watch]
      resourcequotas                                             []                 []              [get list watch]
      resourcequotausages                                        []                 []              [get list watch]
      rolebindingrestrictions                                    []                 []              [get list watch]
      deploymentconfigs.apps.openshift.io/log                    []                 []              [get list watch]
      deploymentconfigs.apps.openshift.io/status                 []                 []              [get list watch]
      controllerrevisions.apps                                   []                 []              [get list watch]
      rolebindingrestrictions.authorization.openshift.io         []                 []              [get list watch]
      builds.build.openshift.io/log                              []                 []              [get list watch]
      imagestreams.image.openshift.io/status                     []                 []              [get list watch]
      appliedclusterresourcequotas.quota.openshift.io            []                 []              [get list watch]
      imagestreams/layers                                        []                 []              [get update get]
      imagestreams.image.openshift.io/layers                     []                 []              [get update get]
      builds/details                                             []                 []              [update]
      builds.build.openshift.io/details                          []                 []              [update]
    
    
    Name:         basic-user
    Labels:       <none>
    Annotations:  openshift.io/description: A user that can get basic information about projects.
    	              rbac.authorization.kubernetes.io/autoupdate: true
    PolicyRule:
    	Resources                                           Non-Resource URLs  Resource Names  Verbs
    	  ---------                                           -----------------  --------------  -----
    	  selfsubjectrulesreviews                             []                 []              [create]
    	  selfsubjectaccessreviews.authorization.k8s.io       []                 []              [create]
    	  selfsubjectrulesreviews.authorization.openshift.io  []                 []              [create]
    	  clusterroles.rbac.authorization.k8s.io              []                 []              [get list watch]
    	  clusterroles                                        []                 []              [get list]
    	  clusterroles.authorization.openshift.io             []                 []              [get list]
    	  storageclasses.storage.k8s.io                       []                 []              [get list]
    	  users                                               []                 [~]             [get]
    	  users.user.openshift.io                             []                 [~]             [get]
    	  projects                                            []                 []              [list watch]
    	  projects.project.openshift.io                       []                 []              [list watch]
    	  projectrequests                                     []                 []              [list]
    	  projectrequests.project.openshift.io                []                 []              [list]
    
    Name:         cluster-admin
    Labels:       kubernetes.io/bootstrapping=rbac-defaults
    Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
    PolicyRule:
    Resources  Non-Resource URLs  Resource Names  Verbs
    ---------  -----------------  --------------  -----
    *.*        []                 []              [*]
               [*]                []              [*]
    
    ...

  2. To view the current set of cluster role bindings, which shows the users and groups that are bound to various roles:

    $ oc describe clusterrolebinding.rbac

    Example output

    Name:         alertmanager-main
    Labels:       <none>
    Annotations:  <none>
    Role:
      Kind:  ClusterRole
      Name:  alertmanager-main
    Subjects:
      Kind            Name               Namespace
      ----            ----               ---------
      ServiceAccount  alertmanager-main  openshift-monitoring
    
    
    Name:         basic-users
    Labels:       <none>
    Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
    Role:
      Kind:  ClusterRole
      Name:  basic-user
    Subjects:
      Kind   Name                  Namespace
      ----   ----                  ---------
      Group  system:authenticated
    
    
    Name:         cloud-credential-operator-rolebinding
    Labels:       <none>
    Annotations:  <none>
    Role:
      Kind:  ClusterRole
      Name:  cloud-credential-operator-role
    Subjects:
      Kind            Name     Namespace
      ----            ----     ---------
      ServiceAccount  default  openshift-cloud-credential-operator
    
    
    Name:         cluster-admin
    Labels:       kubernetes.io/bootstrapping=rbac-defaults
    Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
    Role:
      Kind:  ClusterRole
      Name:  cluster-admin
    Subjects:
      Kind   Name            Namespace
      ----   ----            ---------
      Group  system:masters
    
    
    Name:         cluster-admins
    Labels:       <none>
    Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
    Role:
      Kind:  ClusterRole
      Name:  cluster-admin
    Subjects:
      Kind   Name                   Namespace
      ----   ----                   ---------
      Group  system:cluster-admins
      User   system:admin
    
    
    Name:         cluster-api-manager-rolebinding
    Labels:       <none>
    Annotations:  <none>
    Role:
      Kind:  ClusterRole
      Name:  cluster-api-manager-role
    Subjects:
      Kind            Name     Namespace
      ----            ----     ---------
      ServiceAccount  default  openshift-machine-api
    
    ...

8.5. Viewing local roles and bindings

You can use the oc CLI to view local roles and bindings by using the oc describe command.

Prerequisites

  • Install the oc CLI.
  • Obtain permission to view the local roles and bindings:

    • Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any resource, including viewing local roles and bindings.
    • Users with the admin default cluster role bound locally can view and manage roles and bindings in that project.

Procedure

  1. To view the current set of local role bindings, which show the users and groups that are bound to various roles for the current project:

    $ oc describe rolebinding.rbac
  2. To view the local role bindings for a different project, add the -n flag to the command:

    $ oc describe rolebinding.rbac -n joe-project

    Example output

    Name:         admin
    Labels:       <none>
    Annotations:  <none>
    Role:
      Kind:  ClusterRole
      Name:  admin
    Subjects:
      Kind  Name        Namespace
      ----  ----        ---------
      User  kube:admin
    
    
    Name:         system:deployers
    Labels:       <none>
    Annotations:  openshift.io/description:
                    Allows deploymentconfigs in this namespace to rollout pods in
                    this namespace.  It is auto-managed by a controller; remove
                    subjects to disa...
    Role:
      Kind:  ClusterRole
      Name:  system:deployer
    Subjects:
      Kind            Name      Namespace
      ----            ----      ---------
      ServiceAccount  deployer  joe-project
    
    
    Name:         system:image-builders
    Labels:       <none>
    Annotations:  openshift.io/description:
                    Allows builds in this namespace to push images to this
                    namespace.  It is auto-managed by a controller; remove subjects
                    to disable.
    Role:
      Kind:  ClusterRole
      Name:  system:image-builder
    Subjects:
      Kind            Name     Namespace
      ----            ----     ---------
      ServiceAccount  builder  joe-project
    
    
    Name:         system:image-pullers
    Labels:       <none>
    Annotations:  openshift.io/description:
                    Allows all pods in this namespace to pull images from this
                    namespace.  It is auto-managed by a controller; remove subjects
                    to disable.
    Role:
      Kind:  ClusterRole
      Name:  system:image-puller
    Subjects:
      Kind   Name                                Namespace
      ----   ----                                ---------
      Group  system:serviceaccounts:joe-project

8.6. Adding roles to users

You can use the oc adm administrator CLI to manage the roles and bindings.

Binding, or adding, a role to users or groups gives the user or group the access that is granted by the role. You can add and remove roles to and from users and groups using oc adm policy commands.

You can bind any of the default cluster roles to local users or groups in your project.

Procedure

  1. Add a role to a user in a specific project:

    $ oc adm policy add-role-to-user <role> <user> -n <project>

    For example, you can add the admin role to the alice user in the joe project by running:

    $ oc adm policy add-role-to-user admin alice -n joe
    Tip

    You can alternatively apply the following YAML to add the role to the user:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: admin-0
      namespace: joe
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: admin
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: User
      name: alice
  2. View the local role bindings and verify the addition in the output:

    $ oc describe rolebinding.rbac -n <project>

    For example, to view the local role bindings for the joe project:

    $ oc describe rolebinding.rbac -n joe

    Example output

    Name:         admin
    Labels:       <none>
    Annotations:  <none>
    Role:
      Kind:  ClusterRole
      Name:  admin
    Subjects:
      Kind  Name        Namespace
      ----  ----        ---------
      User  kube:admin
    
    
    Name:         admin-0
    Labels:       <none>
    Annotations:  <none>
    Role:
      Kind:  ClusterRole
      Name:  admin
    Subjects:
      Kind  Name   Namespace
      ----  ----   ---------
      User  alice 1
    
    
    Name:         system:deployers
    Labels:       <none>
    Annotations:  openshift.io/description:
                    Allows deploymentconfigs in this namespace to rollout pods in
                    this namespace.  It is auto-managed by a controller; remove
                    subjects to disa...
    Role:
      Kind:  ClusterRole
      Name:  system:deployer
    Subjects:
      Kind            Name      Namespace
      ----            ----      ---------
      ServiceAccount  deployer  joe
    
    
    Name:         system:image-builders
    Labels:       <none>
    Annotations:  openshift.io/description:
                    Allows builds in this namespace to push images to this
                    namespace.  It is auto-managed by a controller; remove subjects
                    to disable.
    Role:
      Kind:  ClusterRole
      Name:  system:image-builder
    Subjects:
      Kind            Name     Namespace
      ----            ----     ---------
      ServiceAccount  builder  joe
    
    
    Name:         system:image-pullers
    Labels:       <none>
    Annotations:  openshift.io/description:
                    Allows all pods in this namespace to pull images from this
                    namespace.  It is auto-managed by a controller; remove subjects
                    to disable.
    Role:
      Kind:  ClusterRole
      Name:  system:image-puller
    Subjects:
      Kind   Name                                Namespace
      ----   ----                                ---------
      Group  system:serviceaccounts:joe

    1
    The alice user has been added to the admin-0 RoleBinding.

8.7. Creating a local role

You can create a local role for a project and then bind it to a user.

Procedure

  1. To create a local role for a project, run the following command:

    $ oc create role <name> --verb=<verb> --resource=<resource> -n <project>

    In this command, specify:

    • <name>, the local role’s name
    • <verb>, a comma-separated list of the verbs to apply to the role
    • <resource>, the resources that the role applies to
    • <project>, the project name

    For example, to create a local role that allows a user to view pods in the blue project, run the following command:

    $ oc create role podview --verb=get --resource=pod -n blue
  2. To bind the new role to a user, run the following command:

    $ oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue

8.8. Creating a cluster role

You can create a cluster role.

Procedure

  1. To create a cluster role, run the following command:

    $ oc create clusterrole <name> --verb=<verb> --resource=<resource>

    In this command, specify:

    • <name>, the cluster role’s name
    • <verb>, a comma-separated list of the verbs to apply to the role
    • <resource>, the resources that the role applies to

    For example, to create a cluster role that allows a user to view pods, run the following command:

    $ oc create clusterrole podviewonly --verb=get --resource=pod

8.9. Local role binding commands

When you manage a user or group’s associated roles for local role bindings using the following operations, a project may be specified with the -n flag. If it is not specified, then the current project is used.

You can use the following commands for local RBAC management.

Table 8.1. Local role binding operations
Command | Description

$ oc adm policy who-can <verb> <resource>

Indicates which users can perform an action on a resource.

$ oc adm policy add-role-to-user <role> <username>

Binds a specified role to specified users in the current project.

$ oc adm policy remove-role-from-user <role> <username>

Removes a given role from specified users in the current project.

$ oc adm policy remove-user <username>

Removes specified users and all of their roles in the current project.

$ oc adm policy add-role-to-group <role> <groupname>

Binds a given role to specified groups in the current project.

$ oc adm policy remove-role-from-group <role> <groupname>

Removes a given role from specified groups in the current project.

$ oc adm policy remove-group <groupname>

Removes specified groups and all of their roles in the current project.
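
For example, to list which users and groups can delete pods in the current project, run the who-can operation with a concrete verb and resource:

$ oc adm policy who-can delete pods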

8.10. Cluster role binding commands

You can also manage cluster role bindings using the following operations. The -n flag is not used for these operations because cluster role bindings use non-namespaced resources.

Table 8.2. Cluster role binding operations
Command | Description

$ oc adm policy add-cluster-role-to-user <role> <username>

Binds a given role to specified users for all projects in the cluster.

$ oc adm policy remove-cluster-role-from-user <role> <username>

Removes a given role from specified users for all projects in the cluster.

$ oc adm policy add-cluster-role-to-group <role> <groupname>

Binds a given role to specified groups for all projects in the cluster.

$ oc adm policy remove-cluster-role-from-group <role> <groupname>

Removes a given role from specified groups for all projects in the cluster.

8.11. Creating a cluster admin

The cluster-admin role is required to perform administrator-level tasks on the OpenShift Container Platform cluster, such as modifying cluster resources.

Prerequisites

  • You must have created a user to define as the cluster admin.

Procedure

  • Define the user as a cluster admin:

    $ oc adm policy add-cluster-role-to-user cluster-admin <user>

Chapter 9. Removing the kubeadmin user

9.1. The kubeadmin user

OpenShift Container Platform creates a cluster administrator, kubeadmin, after the installation process completes.

This user has the cluster-admin role automatically applied and is treated as the root user for the cluster. The password is dynamically generated and unique to your OpenShift Container Platform environment. After installation completes, the password is provided in the installation program’s output. For example:

INFO Install complete!
INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI.
INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes).
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com
INFO Login to the console with user: kubeadmin, password: <provided>

9.2. Removing the kubeadmin user

After you define an identity provider and create a new cluster-admin user, you can remove the kubeadmin user to improve cluster security.

Warning

If you follow this procedure before another user is a cluster-admin, then OpenShift Container Platform must be reinstalled. It is not possible to undo this command.

Prerequisites

  • You must have configured at least one identity provider.
  • You must have added the cluster-admin role to a user.
  • You must be logged in as an administrator.

Procedure

  • Remove the kubeadmin secrets:

    $ oc delete secrets kubeadmin -n kube-system
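
    Optionally, you can confirm that the secret was removed. After deletion, the following command is expected to return a NotFound error:

    $ oc get secret kubeadmin -n kube-system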

Chapter 10. Understanding and creating service accounts

10.1. Service accounts overview

A service account is an OpenShift Container Platform account that allows a component to directly access the API. Service accounts are API objects that exist within each project. Service accounts provide a flexible way to control API access without sharing a regular user’s credentials.

When you use the OpenShift Container Platform CLI or web console, your API token authenticates you to the API. You can associate a component with a service account so that the component can access the API without using a regular user’s credentials. For example, service accounts can allow:

  • Replication controllers to make API calls to create or delete pods.
  • Applications inside containers to make API calls for discovery purposes.
  • External applications to make API calls for monitoring or integration purposes.

Each service account’s user name is derived from its project and name:

system:serviceaccount:<project>:<name>
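
For example, a service account named robot in the project1 project has the following user name:

system:serviceaccount:project1:robot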

Every service account is also a member of two groups:

Group | Description

system:serviceaccounts

Includes all service accounts in the system.

system:serviceaccounts:<project>

Includes all service accounts in the specified project.

Each service account automatically contains two secrets:

  • An API token
  • Credentials for the OpenShift Container Registry

The generated API token and registry credentials do not expire, but you can revoke them by deleting the secret. When you delete the secret, a new one is automatically generated to take its place.
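
If you need the token value itself, for example to authenticate an external client, you can decode it from the service account’s token secret. The secret and project names below are placeholders; you can find the actual secret name with oc describe sa:

$ oc get secret <token_secret_name> -n <project> -o jsonpath='{.data.token}' | base64 -d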

10.2. Creating service accounts

You can create a service account in a project and grant it permissions by binding it to a role.

Procedure

  1. Optional: To view the service accounts in the current project:

    $ oc get sa

    Example output

    NAME       SECRETS   AGE
    builder    2         2d
    default    2         2d
    deployer   2         2d

  2. To create a new service account in the current project:

    $ oc create sa <service_account_name> 1
    1
    To create a service account in a different project, specify -n <project_name>.

    Example output

    serviceaccount "robot" created

    Tip

    You can alternatively apply the following YAML to create the service account:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: <service_account_name>
      namespace: <current_project>
  3. Optional: View the secrets for the service account:

    $ oc describe sa robot

    Example output

    Name:		robot
    Namespace:	project1
    Labels:		<none>
    Annotations:	<none>
    
    Image pull secrets:	robot-dockercfg-qzbhb
    
    Mountable secrets: 	robot-token-f4khf
                       	robot-dockercfg-qzbhb
    
    Tokens:            	robot-token-f4khf
                       	robot-token-z8h44

10.3. Examples of granting roles to service accounts

You can grant roles to service accounts in the same way that you grant roles to a regular user account.

  • You can modify the service accounts for the current project. For example, to add the view role to the robot service account in the top-secret project:

    $ oc policy add-role-to-user view system:serviceaccount:top-secret:robot
    Tip

    You can alternatively apply the following YAML to add the role:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: view
      namespace: top-secret
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: view
    subjects:
    - kind: ServiceAccount
      name: robot
      namespace: top-secret
  • You can also grant access to a specific service account in a project. For example, from the project to which the service account belongs, use the -z flag and specify the <service_account_name>:

    $ oc policy add-role-to-user <role_name> -z <service_account_name>
    Important

    If you want to grant access to a specific service account in a project, use the -z flag. Using this flag helps prevent typos and ensures that access is granted to only the specified service account.

    Tip

    You can alternatively apply the following YAML to add the role:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: <rolebinding_name>
      namespace: <current_project_name>
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: <role_name>
    subjects:
    - kind: ServiceAccount
      name: <service_account_name>
      namespace: <current_project_name>
  • To modify a different namespace, you can use the -n option to indicate the project namespace it applies to, as shown in the following examples.

    • For example, to allow all service accounts in all projects to view resources in the my-project project:

      $ oc policy add-role-to-group view system:serviceaccounts -n my-project
      Tip

      You can alternatively apply the following YAML to add the role:

      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: view
        namespace: my-project
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: view
      subjects:
      - apiGroup: rbac.authorization.k8s.io
        kind: Group
        name: system:serviceaccounts
    • To allow all service accounts in the managers project to edit resources in the my-project project:

      $ oc policy add-role-to-group edit system:serviceaccounts:managers -n my-project
      Tip

      You can alternatively apply the following YAML to add the role:

      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: edit
        namespace: my-project
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: edit
      subjects:
      - apiGroup: rbac.authorization.k8s.io
        kind: Group
        name: system:serviceaccounts:managers

Chapter 11. Using service accounts in applications

11.1. Service accounts overview

A service account is an OpenShift Container Platform account that allows a component to directly access the API. Service accounts are API objects that exist within each project. Service accounts provide a flexible way to control API access without sharing a regular user’s credentials.

When you use the OpenShift Container Platform CLI or web console, your API token authenticates you to the API. You can associate a component with a service account so that the component can access the API without using a regular user’s credentials. For example, service accounts can allow:

  • Replication controllers to make API calls to create or delete pods.
  • Applications inside containers to make API calls for discovery purposes.
  • External applications to make API calls for monitoring or integration purposes.

Each service account’s user name is derived from its project and name:

system:serviceaccount:<project>:<name>

Every service account is also a member of two groups:

  • system:serviceaccounts - Includes all service accounts in the system.
  • system:serviceaccounts:<project> - Includes all service accounts in the specified project.
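
For example, a service account named robot in the top-secret project has the user name system:serviceaccount:top-secret:robot and is a member of both system:serviceaccounts and system:serviceaccounts:top-secret.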

Each service account automatically contains two secrets:

  • An API token
  • Credentials for the OpenShift Container Registry

The generated API token and registry credentials do not expire, but you can revoke them by deleting the secret. When you delete the secret, a new one is automatically generated to take its place.
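
For example, assuming the API token secret from the earlier example output is named robot-token-f4khf, you could revoke it with a command similar to the following, after which a replacement secret is generated automatically:

$ oc delete secret robot-token-f4khf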

11.2. Default service accounts

Your OpenShift Container Platform cluster contains default service accounts for cluster management and generates more service accounts for each project.

11.2.1. Default cluster service accounts

Several infrastructure controllers run using service account credentials. The following service accounts are created in the OpenShift Container Platform infrastructure project (openshift-infra) at server start, and given the following roles cluster-wide:

  • replication-controller - Assigned the system:replication-controller role.
  • deployment-controller - Assigned the system:deployment-controller role.
  • build-controller - Assigned the system:build-controller role. Additionally, the build-controller service account is included in the privileged security context constraint to create privileged build pods.

11.2.2. Default project service accounts and roles

Three service accounts are automatically created in each project:

  • builder - Used by build pods. It is given the system:image-builder role, which allows pushing images to any imagestream in the project using the internal Docker registry.
  • deployer - Used by deployment pods and given the system:deployer role, which allows viewing and modifying replication controllers and pods in the project.
  • default - Used to run all other pods unless they specify a different service account.

All service accounts in a project are given the system:image-puller role, which allows pulling images from any imagestream in the project using the internal container image registry.

11.3. Creating service accounts

You can create a service account in a project and grant it permissions by binding it to a role.

Procedure

  1. Optional: To view the service accounts in the current project:

    $ oc get sa

    Example output

    NAME       SECRETS   AGE
    builder    2         2d
    default    2         2d
    deployer   2         2d

  2. To create a new service account in the current project:

    $ oc create sa <service_account_name> 1
    1
    To create a service account in a different project, specify -n <project_name>.

    Example output

    serviceaccount "robot" created

    Tip

    You can alternatively apply the following YAML to create the service account:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: <service_account_name>
      namespace: <current_project>
  3. Optional: View the secrets for the service account:

    $ oc describe sa robot

    Example output

    Name:		robot
    Namespace:	project1
    Labels:		<none>
    Annotations:	<none>
    
    Image pull secrets:	robot-dockercfg-qzbhb
    
    Mountable secrets: 	robot-token-f4khf
                       	robot-dockercfg-qzbhb
    
    Tokens:            	robot-token-f4khf
                       	robot-token-z8h44

11.4. Using a service account’s credentials externally

You can distribute a service account’s token to external applications that must authenticate to the API.

To pull an image, the authenticated user must have get rights on the requested imagestreams/layers. To push an image, the authenticated user must have update rights on the requested imagestreams/layers.

By default, all service accounts in a project have rights to pull any image in the same project, and the builder service account has rights to push any image in the same project.

Procedure

  1. View the service account’s API token:

    $ oc describe secret <secret_name>

    For example:

    $ oc describe secret robot-token-uzkbh -n top-secret

    Example output

    Name:		robot-token-uzkbh
    Labels:		<none>
    Annotations:	kubernetes.io/service-account.name=robot,kubernetes.io/service-account.uid=49f19e2e-16c6-11e5-afdc-3c970e4b7ffe
    
    Type:	kubernetes.io/service-account-token
    
    Data
    
    token:	eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...

  2. Log in using the token that you obtained:

    $ oc login --token=eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...

    Example output

    Logged into "https://server:8443" as "system:serviceaccount:top-secret:robot" using the token provided.
    
    You don't have any projects. You can try to create a new project, by running
    
        $ oc new-project <projectname>

  3. Confirm that you logged in as the service account:

    $ oc whoami

    Example output

    system:serviceaccount:top-secret:robot
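
As an illustrative sketch only, an external application can also present the token directly as a bearer token in the Authorization header. The server URL below reuses the hypothetical address from the example output, and the ~ user resource returns the identity associated with the token:

$ curl -H "Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9..." \
    https://server:8443/apis/user.openshift.io/v1/users/~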

Chapter 12. Using a service account as an OAuth client

12.1. Service accounts as OAuth clients

You can use a service account as a constrained form of OAuth client. Service accounts can request only a subset of scopes that allow access to some basic user information and role-based power inside of the service account’s own namespace:

  • user:info
  • user:check-access
  • role:<any_role>:<service_account_namespace>
  • role:<any_role>:<service_account_namespace>:!

When using a service account as an OAuth client:

  • client_id is system:serviceaccount:<service_account_namespace>:<service_account_name>.
  • client_secret can be any of the API tokens for that service account. For example:

    $ oc sa get-token <service_account_name>
  • To get WWW-Authenticate challenges, set the serviceaccounts.openshift.io/oauth-want-challenges annotation on the service account to true.
  • redirect_uri must match an annotation on the service account.
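
For example, you might wire these pieces together with oc as follows; the service account name and redirect URI are placeholders, and the redirect URI annotations are described in the next section:

$ oc create sa <service_account_name>
$ oc annotate sa <service_account_name> serviceaccounts.openshift.io/oauth-want-challenges=true
$ oc annotate sa <service_account_name> serviceaccounts.openshift.io/oauth-redirecturi.first=https://example.com/callback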

12.1.1. Redirect URIs for service accounts as OAuth clients

Annotation keys must have the prefix serviceaccounts.openshift.io/oauth-redirecturi. or serviceaccounts.openshift.io/oauth-redirectreference., for example:

serviceaccounts.openshift.io/oauth-redirecturi.<name>

In its simplest form, the annotation can be used to directly specify valid redirect URIs. For example:

"serviceaccounts.openshift.io/oauth-redirecturi.first":  "https://example.com"
"serviceaccounts.openshift.io/oauth-redirecturi.second": "https://other.com"

The first and second postfixes in the above example are used to separate the two valid redirect URIs.

In more complex configurations, static redirect URIs may not be enough. For example, perhaps you want all Ingresses for a route to be considered valid. This is where dynamic redirect URIs via the serviceaccounts.openshift.io/oauth-redirectreference. prefix come into play.

For example:

"serviceaccounts.openshift.io/oauth-redirectreference.first": "{\"kind\":\"OAuthRedirectReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"Route\",\"name\":\"jenkins\"}}"

Since the value for this annotation contains serialized JSON data, it is easier to see in an expanded format:

{
  "kind": "OAuthRedirectReference",
  "apiVersion": "v1",
  "reference": {
    "kind": "Route",
    "name": "jenkins"
  }
}

Now you can see that an OAuthRedirectReference allows you to reference the route named jenkins. Thus, all Ingresses for that route will now be considered valid. The full specification for an OAuthRedirectReference is:

{
  "kind": "OAuthRedirectReference",
  "apiVersion": "v1",
  "reference": {
    "kind": ..., 1
    "name": ..., 2
    "group": ... 3
  }
}
1
kind refers to the type of the object being referenced. Currently, only route is supported.
2
name refers to the name of the object. The object must be in the same namespace as the service account.
3
group refers to the group of the object. Leave this blank, as the group for a route is the empty string.

Both annotation prefixes can be combined to override the data provided by the reference object. For example:

"serviceaccounts.openshift.io/oauth-redirecturi.first":  "custompath"
"serviceaccounts.openshift.io/oauth-redirectreference.first": "{\"kind\":\"OAuthRedirectReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"Route\",\"name\":\"jenkins\"}}"

The first postfix is used to tie the annotations together. Assuming that the jenkins route had an Ingress of https://example.com, now https://example.com/custompath is considered valid, but https://example.com is not. The format for partially supplying override data is as follows:

  • Scheme - "https://"
  • Hostname - "//website.com"
  • Port - "//:8000"
  • Path - "examplepath"

Note

Specifying a hostname override will replace the hostname data from the referenced object, which is not likely to be desired behavior.

Any of the above syntax elements can be combined using the following format:

<scheme:>//<hostname><:port>/<path>

The same object can be referenced more than once for more flexibility:

"serviceaccounts.openshift.io/oauth-redirecturi.first":  "custompath"
"serviceaccounts.openshift.io/oauth-redirectreference.first": "{\"kind\":\"OAuthRedirectReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"Route\",\"name\":\"jenkins\"}}"
"serviceaccounts.openshift.io/oauth-redirecturi.second":  "//:8000"
"serviceaccounts.openshift.io/oauth-redirectreference.second": "{\"kind\":\"OAuthRedirectReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"Route\",\"name\":\"jenkins\"}}"

Assuming that the route named jenkins has an Ingress of https://example.com, then both https://example.com:8000 and https://example.com/custompath are considered valid.

Static and dynamic annotations can be used at the same time to achieve the desired behavior:

"serviceaccounts.openshift.io/oauth-redirectreference.first": "{\"kind\":\"OAuthRedirectReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"Route\",\"name\":\"jenkins\"}}"
"serviceaccounts.openshift.io/oauth-redirecturi.second": "https://other.com"

Chapter 13. Scoping tokens

13.1. About scoping tokens

You can create scoped tokens to delegate some of your permissions to another user or service account. For example, a project administrator might want to delegate the power to create pods.

A scoped token is a token that identifies as a given user but is limited to certain actions by its scope. Only a user with the cluster-admin role can create scoped tokens.

Scopes are evaluated by converting the set of scopes for a token into a set of PolicyRules. Then, the request is matched against those rules. The request attributes must match at least one of the scope rules to be passed to the "normal" authorizer for further authorization checks.

13.1.1. User scopes

User scopes are focused on getting information about a given user. They are intent-based, so the rules are automatically created for you:

  • user:full - Allows full read/write access to the API with all of the user’s permissions.
  • user:info - Allows read-only access to information about the user, such as name and groups.
  • user:check-access - Allows access to self-localsubjectaccessreviews and self-subjectaccessreviews. These are the variables where you pass an empty user and groups in your request object.
  • user:list-projects - Allows read-only access to list the projects the user has access to.

13.1.2. Role scope

The role scope allows you to have the same level of access as a given role filtered by namespace.

  • role:<cluster-role name>:<namespace or * for all> - Limits the scope to the rules specified by the cluster-role, but only in the specified namespace.

    Note

    Caveat: This prevents escalating access. Even if the role allows access to resources like secrets, rolebindings, and roles, this scope will deny access to those resources. This helps prevent unexpected escalations. Many people do not think of a role like edit as being an escalating role, but with access to a secret it is.

  • role:<cluster-role name>:<namespace or * for all>:! - This is similar to the example above, except that including the bang causes this scope to allow escalating access.

Chapter 14. Using bound service account tokens

You can use bound service account tokens, which improve the ability to integrate with cloud provider identity and access management (IAM) services, such as AWS IAM.

14.1. About bound service account tokens

You can use bound service account tokens to limit the scope of permissions for a given service account token. These tokens are audience and time-bound. This facilitates the authentication of a service account to an IAM role and the generation of temporary credentials mounted to a pod. You can request bound service account tokens by using volume projection and the TokenRequest API.

14.2. Configuring bound service account tokens using volume projection

You can configure pods to request bound service account tokens by using volume projection.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.
  • You have created a service account. This procedure assumes that the service account is named build-robot.

Procedure

  1. Optional: Set the service account issuer.

    This step is typically not required if the bound tokens are used only within the cluster.

    Important

    If you change the service account issuer to a custom one, the previous service account issuer is still trusted for the next 24 hours.

    You can force all holders to request a new bound token either by manually restarting all pods in the cluster or by performing a rolling node restart. Before performing either action, wait for a new revision of the Kubernetes API server pods to roll out with your service account issuer changes.

    1. Edit the cluster Authentication object:

      $ oc edit authentications cluster
    2. Set the spec.serviceAccountIssuer field to the desired service account issuer value:

      spec:
        serviceAccountIssuer: https://test.default.svc 1
      1
      This value should be a URL from which the recipient of a bound token can source the public keys necessary to verify the signature of the token. The default is https://kubernetes.default.svc.
    3. Save the file to apply the changes.
    4. Wait for a new revision of the Kubernetes API server pods to roll out. It can take several minutes for all nodes to update to the new revision. Run the following command:

      $ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

      Review the NodeInstallerProgressing status condition for the Kubernetes API server to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

      AllNodesAtLatestRevision
      3 nodes are at revision 12 1
      1
      In this example, the latest revision number is 12.

      If the output shows a message similar to one of the following messages, the update is still in progress. Wait a few minutes and try again.

      • 3 nodes are at revision 11; 0 nodes have achieved new revision 12
      • 2 nodes are at revision 11; 1 nodes are at revision 12
    5. Optional: Force the holder to request a new bound token either by performing a rolling node restart or by manually restarting all pods in the cluster.

      • Perform a rolling node restart:

        Warning

        It is not recommended to perform a rolling node restart if you have custom workloads running on your cluster, because it can cause a service interruption. Instead, manually restart all pods in the cluster.

        Restart nodes sequentially. Wait for the node to become fully available before restarting the next node. See Rebooting a node gracefully for instructions on how to drain, restart, and mark a node as schedulable again.

      • Manually restart all pods in the cluster:

        Warning

        Be aware that running this command causes a service interruption, because it deletes every running pod in every namespace. These pods will automatically restart after they are deleted.

        Run the following command:

        $ for I in $(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{"\n"} {end}'); \
              do oc delete pods --all -n $I; \
              sleep 1; \
              done
  2. Configure a pod to use a bound service account token by using volume projection.

    1. Create a file called pod-projected-svc-token.yaml with the following contents:

      apiVersion: v1
      kind: Pod
      metadata:
        name: nginx
      spec:
        containers:
        - image: nginx
          name: nginx
          volumeMounts:
          - mountPath: /var/run/secrets/tokens
            name: vault-token
        serviceAccountName: build-robot 1
        volumes:
        - name: vault-token
          projected:
            sources:
            - serviceAccountToken:
                path: vault-token 2
                expirationSeconds: 7200 3
                audience: vault 4
      1
      A reference to an existing service account.
      2
      The path relative to the mount point of the file to project the token into.
      3
      Optionally set the expiration of the service account token, in seconds. The default is 3600 seconds (1 hour), and the value must be at least 600 seconds (10 minutes). The kubelet starts trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours.
      4
      Optionally set the intended audience of the token. The recipient of a token should verify that the recipient identity matches the audience claim of the token, and should otherwise reject the token. The audience defaults to the identifier of the API server.
    2. Create the pod:

      $ oc create -f pod-projected-svc-token.yaml

      The kubelet requests and stores the token on behalf of the pod, makes the token available to the pod at a configurable file path, and refreshes the token as it approaches expiration.

  3. The application that uses the bound token must handle reloading the token when it rotates.

    The kubelet rotates the token if it is older than 80 percent of its time to live, or if the token is older than 24 hours.
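
    A minimal sketch of handling rotation is to re-read the projected token file shortly before each use rather than caching it for the lifetime of the process; the path matches the pod example above:

    TOKEN="$(cat /var/run/secrets/tokens/vault-token)"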

Chapter 15. Managing security context constraints

15.1. About security context constraints

Similar to the way that RBAC resources control user access, administrators can use security context constraints (SCCs) to control permissions for pods. These permissions include actions that a pod can perform and what resources it can access. You can use SCCs to define a set of conditions that a pod must run with to be accepted into the system.

Security context constraints allow an administrator to control:

  • Whether a pod can run privileged containers with the allowPrivilegedContainer flag.
  • Whether a pod is constrained with the allowPrivilegeEscalation flag.
  • The capabilities that a container can request
  • The use of host directories as volumes
  • The SELinux context of the container
  • The container user ID
  • The use of host namespaces and networking
  • The allocation of an FSGroup that owns the pod volumes
  • The configuration of allowable supplemental groups
  • Whether a container requires write access to its root file system
  • The usage of volume types
  • The configuration of allowable seccomp profiles
Important

Do not set the openshift.io/run-level label on any namespaces in OpenShift Container Platform. This label is for use by internal OpenShift Container Platform components to manage the startup of major API groups, such as the Kubernetes API server and OpenShift API server. If the openshift.io/run-level label is set, no SCCs are applied to pods in that namespace, causing any workloads running in that namespace to be highly privileged.

15.1.1. Default security context constraints

The cluster contains several default security context constraints (SCCs) as described in the table below. Additional SCCs might be installed when you install Operators or other components to OpenShift Container Platform.

Important

Do not modify the default SCCs. Customizing the default SCCs can lead to issues when some of the platform pods deploy or OpenShift Container Platform is upgraded. During upgrades between some versions of OpenShift Container Platform, the values of the default SCCs are reset to the default values, which discards all customizations to those SCCs.

Instead, create new SCCs as needed.

Table 15.1. Default security context constraints
Security context constraint | Description

anyuid

Provides all features of the restricted SCC, but allows users to run with any UID and any GID.

hostaccess

Allows access to all host namespaces but still requires pods to be run with a UID and SELinux context that are allocated to the namespace.

Warning

This SCC allows host access to namespaces, file systems, and PIDs. It should only be used by trusted pods. Grant with caution.

hostmount-anyuid

Provides all the features of the restricted SCC, but allows host mounts and running as any UID and any GID on the system.

Warning

This SCC allows host file system access as any UID, including UID 0. Grant with caution.

hostnetwork

Allows using host networking and host ports but still requires pods to be run with a UID and SELinux context that are allocated to the namespace.

Warning

If additional workloads are run on control plane hosts, use caution when providing access to hostnetwork. A workload that runs hostnetwork on a control plane host is effectively root on the cluster and must be trusted accordingly.

node-exporter

Used for the Prometheus node exporter.

Warning

This SCC allows host file system access as any UID, including UID 0. Grant with caution.

nonroot

Provides all features of the restricted SCC, but allows users to run with any non-root UID. The user must specify the UID or it must be specified in the manifest of the container runtime.

privileged

Allows access to all privileged and host features and the ability to run as any user, any group, any FSGroup, and with any SELinux context.

Warning

This is the most relaxed SCC and should be used only for cluster administration. Grant with caution.

The privileged SCC allows:

  • Users to run privileged pods
  • Pods to mount host directories as volumes
  • Pods to run as any user
  • Pods to run with any MCS label
  • Pods to use the host’s IPC namespace
  • Pods to use the host’s PID namespace
  • Pods to use any FSGroup
  • Pods to use any supplemental group
  • Pods to use any seccomp profiles
  • Pods to request any capabilities
Note

Setting privileged: true in the pod specification does not necessarily select the privileged SCC. The SCC that has allowPrivilegedContainer: true and has the highest priority will be chosen if the user has the permissions to use it.

restricted

Denies access to all host features and requires pods to be run with a UID and SELinux context that are allocated to the namespace. This is the most restrictive SCC provided by a new installation and will be used by default for authenticated users.

The restricted SCC:

  • Ensures that pods cannot run as privileged
  • Ensures that pods cannot mount host directory volumes
  • Requires that a pod is run as a user in a pre-allocated range of UIDs
  • Requires that a pod is run with a pre-allocated MCS label
  • Allows pods to use any FSGroup
  • Allows pods to use any supplemental group
Note

The restricted SCC is the most restrictive of the SCCs that ship by default with the system. However, you can create a custom SCC that is even more restrictive. For example, you can create an SCC that restricts readOnlyRootFS to true and allowPrivilegeEscalation to false.

15.1.2. Security context constraints settings

Security context constraints (SCCs) are composed of settings and strategies that control the security features a pod has access to. These settings fall into three categories:

  • Controlled by a boolean - Fields of this type default to the most restrictive value. For example, AllowPrivilegedContainer is always set to false if unspecified.
  • Controlled by an allowable set - Fields of this type are checked against the set to ensure their value is allowed.
  • Controlled by a strategy - Items that have a strategy to generate a value provide:

    • A mechanism to generate the value, and
    • A mechanism to ensure that a specified value falls into the set of allowable values.

CRI-O has the following default list of capabilities that are allowed for each container of a pod:

  • CHOWN
  • DAC_OVERRIDE
  • FSETID
  • FOWNER
  • SETGID
  • SETUID
  • SETPCAP
  • NET_BIND_SERVICE
  • KILL

The containers use the capabilities from this default list, but pod manifest authors can alter the list by requesting additional capabilities or removing some of the default behaviors. Use the allowedCapabilities, defaultAddCapabilities, and requiredDropCapabilities parameters to control such requests from the pods. With these parameters you can specify which capabilities can be requested, which ones must be added to each container, and which ones must be forbidden, or dropped, from each container.

Note

You can drop all capabilities from containers by setting the requiredDropCapabilities parameter to ALL.
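
As a sketch of how these three parameters fit together in an SCC, the following fragment allows containers to request NET_ADMIN, always adds NET_BIND_SERVICE, and always drops KILL and MKNOD; the capability choices are purely illustrative:

allowedCapabilities:
- NET_ADMIN
defaultAddCapabilities:
- NET_BIND_SERVICE
requiredDropCapabilities:
- KILL
- MKNOD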

15.1.3. Security context constraints strategies

RunAsUser

  • MustRunAs - Requires a runAsUser to be configured. Uses the configured runAsUser as the default. Validates against the configured runAsUser.

    Example MustRunAs snippet

    ...
    runAsUser:
      type: MustRunAs
      uid: <id>
    ...

  • MustRunAsRange - Requires minimum and maximum values to be defined if not using pre-allocated values. Uses the minimum as the default. Validates against the entire allowable range.

    Example MustRunAsRange snippet

    ...
    runAsUser:
      type: MustRunAsRange
      uidRangeMax: <maxvalue>
      uidRangeMin: <minvalue>
    ...

  • MustRunAsNonRoot - Requires that the pod be submitted with a non-zero runAsUser or have the USER directive defined in the image. No default provided.

    Example MustRunAsNonRoot snippet

    ...
    runAsUser:
      type: MustRunAsNonRoot
    ...

  • RunAsAny - No default provided. Allows any runAsUser to be specified.

    Example RunAsAny snippet

    ...
    runAsUser:
      type: RunAsAny
    ...

SELinuxContext

  • MustRunAs - Requires seLinuxOptions to be configured if not using pre-allocated values. Uses seLinuxOptions as the default. Validates against seLinuxOptions.
  • RunAsAny - No default provided. Allows any seLinuxOptions to be specified.

SupplementalGroups

  • MustRunAs - Requires at least one range to be specified if not using pre-allocated values. Uses the minimum value of the first range as the default. Validates against all ranges.
  • RunAsAny - No default provided. Allows any supplementalGroups to be specified.

FSGroup

  • MustRunAs - Requires at least one range to be specified if not using pre-allocated values. Uses the minimum value of the first range as the default. Validates against the first ID in the first range.
  • RunAsAny - No default provided. Allows any fsGroup ID to be specified.

15.1.4. Controlling volumes

The usage of specific volume types can be controlled by setting the volumes field of the SCC. The allowable values of this field correspond to the volume sources that are defined when creating a volume.

The recommended minimum set of allowed volumes for new SCCs is configMap, downwardAPI, emptyDir, persistentVolumeClaim, secret, and projected.
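
For example, an SCC that permits only that recommended minimum set would carry a volumes field similar to the following:

volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- projected
- secret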

Note

This list of allowable volume types is not exhaustive because new types are added with each release of OpenShift Container Platform.

Note

For backwards compatibility, the usage of allowHostDirVolumePlugin overrides settings in the volumes field. For example, if allowHostDirVolumePlugin is set to false but allowed in the volumes field, then the hostPath value will be removed from volumes.

15.1.5. Admission control

Admission control with SCCs allows for control over the creation of resources based on the capabilities granted to a user.

In terms of the SCCs, this means that an admission controller can inspect the user information made available in the context to retrieve an appropriate set of SCCs. Doing so ensures the pod is authorized to make requests about its operating environment or to generate a set of constraints to apply to the pod.

The set of SCCs that admission uses to authorize a pod are determined by the user identity and groups that the user belongs to. Additionally, if the pod specifies a service account, the set of allowable SCCs includes any constraints accessible to the service account.

Admission uses the following approach to create the final security context for the pod:

  1. Retrieve all SCCs available for use.
  2. Generate field values for security context settings that were not specified on the request.
  3. Validate the final settings against the available constraints.

If a matching set of constraints is found, then the pod is accepted. If the request cannot be matched to an SCC, the pod is rejected.

A pod must validate every field against the SCC. The following are examples for just two of the fields that must be validated:

Note

These examples are in the context of a strategy using the pre-allocated values.

An FSGroup SCC strategy of MustRunAs

If the pod defines a fsGroup ID, then that ID must equal the default fsGroup ID. Otherwise, the pod is not validated by that SCC and the next SCC is evaluated.

If the SecurityContextConstraints.fsGroup field has value RunAsAny and the pod specification omits the Pod.spec.securityContext.fsGroup, then this field is considered valid. Note that it is possible that during validation, other SCC settings will reject other pod fields and thus cause the pod to fail.

A SupplementalGroups SCC strategy of MustRunAs

If the pod specification defines one or more supplementalGroups IDs, then the pod’s IDs must equal one of the IDs in the namespace’s openshift.io/sa.scc.supplemental-groups annotation. Otherwise, the pod is not validated by that SCC and the next SCC is evaluated.

If the SecurityContextConstraints.supplementalGroups field has value RunAsAny and the pod specification omits the Pod.spec.securityContext.supplementalGroups, then this field is considered valid. Note that it is possible that during validation, other SCC settings will reject other pod fields and thus cause the pod to fail.

15.1.6. Security context constraints prioritization

Security context constraints (SCCs) have a priority field that affects the ordering when attempting to validate a request by the admission controller.

A priority value of 0 is the lowest possible priority. A nil priority is considered a 0, or lowest, priority. Higher priority SCCs are moved to the front of the set when sorting.

When the complete set of available SCCs is determined, the SCCs are ordered in the following manner:

  1. The highest priority SCCs are ordered first.
  2. If the priorities are equal, the SCCs are sorted from most restrictive to least restrictive.
  3. If both the priorities and restrictions are equal, the SCCs are sorted by name.

By default, the anyuid SCC granted to cluster administrators is given priority in their SCC set. This allows cluster administrators to run pods as any user by specifying RunAsUser in the pod’s SecurityContext.

15.2. About pre-allocated security context constraints values

The admission controller is aware of certain conditions in the security context constraints (SCCs) that trigger it to look up pre-allocated values from a namespace and populate the SCC before processing the pod. Each SCC strategy is evaluated independently of other strategies, with the pre-allocated values, where allowed, for each policy aggregated with pod specification values to make the final values for the various IDs defined in the running pod.

The following SCCs cause the admission controller to look for pre-allocated values when no ranges are defined in the pod specification:

  1. A RunAsUser strategy of MustRunAsRange with no minimum or maximum set. Admission looks for the openshift.io/sa.scc.uid-range annotation to populate range fields.
  2. An SELinuxContext strategy of MustRunAs with no level set. Admission looks for the openshift.io/sa.scc.mcs annotation to populate the level.
  3. A FSGroup strategy of MustRunAs. Admission looks for the openshift.io/sa.scc.supplemental-groups annotation.
  4. A SupplementalGroups strategy of MustRunAs. Admission looks for the openshift.io/sa.scc.supplemental-groups annotation.

During the generation phase, the security context provider uses default values for any parameter values that are not specifically set in the pod. Default values are based on the selected strategy:

  1. RunAsAny and MustRunAsNonRoot strategies do not provide default values. If the pod needs a parameter value, such as a group ID, you must define the value in the pod specification.
  2. MustRunAs (single value) strategies provide a default value that is always used. For example, for group IDs, even if the pod specification defines its own ID value, the namespace’s default parameter value also appears in the pod’s groups.
  3. MustRunAsRange and MustRunAs (range-based) strategies provide the minimum value of the range. As with a single value MustRunAs strategy, the namespace’s default parameter value appears in the running pod. If a range-based strategy is configurable with multiple ranges, it provides the minimum value of the first configured range.
Note

FSGroup and SupplementalGroups strategies fall back to the openshift.io/sa.scc.uid-range annotation if the openshift.io/sa.scc.supplemental-groups annotation does not exist on the namespace. If neither exists, the SCC is not created.

Note

By default, the annotation-based FSGroup strategy configures itself with a single range based on the minimum value for the annotation. For example, if your annotation reads 1/3, the FSGroup strategy configures itself with a minimum and maximum value of 1. If you want to allow more groups to be accepted for the FSGroup field, you can configure a custom SCC that does not use the annotation.

Note

The openshift.io/sa.scc.supplemental-groups annotation accepts a comma-delimited list of blocks in the format of <start>/<length> or <start>-<end>. The openshift.io/sa.scc.uid-range annotation accepts only a single block.
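
For illustration only, with hypothetical values, such namespace annotations might look like the following, giving one UID block and two supplemental group blocks:

openshift.io/sa.scc.uid-range: 1000040000/10000
openshift.io/sa.scc.supplemental-groups: 1000040000/10000,1000050000/10000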

15.3. Example security context constraints

The following examples show the security context constraints (SCC) format and annotations:

Annotated privileged SCC

allowHostDirVolumePlugin: true
allowHostIPC: true
allowHostNetwork: true
allowHostPID: true
allowHostPorts: true
allowPrivilegedContainer: true
allowedCapabilities: 1
- '*'
apiVersion: security.openshift.io/v1
defaultAddCapabilities: [] 2
fsGroup: 3
  type: RunAsAny
groups: 4
- system:cluster-admins
- system:nodes
kind: SecurityContextConstraints
metadata:
  annotations:
    kubernetes.io/description: 'privileged allows access to all privileged and host
      features and the ability to run as any user, any group, any fsGroup, and with
      any SELinux context.  WARNING: this is the most relaxed SCC and should be used
      only for cluster administration. Grant with caution.'
  creationTimestamp: null
  name: privileged
priority: null
readOnlyRootFilesystem: false
requiredDropCapabilities: 5
- KILL
- MKNOD
- SETUID
- SETGID
runAsUser: 6
  type: RunAsAny
seLinuxContext: 7
  type: RunAsAny
seccompProfiles:
- '*'
supplementalGroups: 8
  type: RunAsAny
users: 9
- system:serviceaccount:default:registry
- system:serviceaccount:default:router
- system:serviceaccount:openshift-infra:build-controller
volumes:
- '*'

1
A list of capabilities that a pod can request. An empty list means that no capabilities can be requested, while the special symbol * allows any capabilities.
2
A list of additional capabilities that are added to any pod.
3
The FSGroup strategy, which dictates the allowable values for the security context.
4
The groups that can access this SCC.
5
A list of capabilities to drop from a pod. Or, specify ALL to drop all capabilities.
6
The runAsUser strategy type, which dictates the allowable values for the Security Context.
7
The seLinuxContext strategy type, which dictates the allowable values for the Security Context.
8
The supplementalGroups strategy, which dictates the allowable supplemental groups for the Security Context.
9
The users who can access this SCC.

The users and groups fields on the SCC control which users can access the SCC. By default, cluster administrators, nodes, and the build controller are granted access to the privileged SCC. All authenticated users are granted access to the restricted SCC.

Without explicit runAsUser setting

apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext: 1
  containers:
  - name: sec-ctx-demo
    image: gcr.io/google-samples/node-hello:1.0

1
When a container or pod does not request a user ID under which it should be run, the effective UID depends on the SCC that admits the pod. Because the restricted SCC is granted to all authenticated users by default, it is available to all users and service accounts and is used in most cases. The restricted SCC uses the MustRunAsRange strategy to constrain and default the possible values of the securityContext.runAsUser field. Because the SCC itself does not provide this range, the admission plugin looks for the openshift.io/sa.scc.uid-range annotation on the current project to populate the range fields. In the end, the container runs with a runAsUser equal to the first value of the range, which is hard to predict because every project has different ranges.

With explicit runAsUser setting

apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000 1
  containers:
    - name: sec-ctx-demo
      image: gcr.io/google-samples/node-hello:1.0

1
A container or pod that requests a specific user ID is accepted by OpenShift Container Platform only when a service account or a user is granted access to an SCC that allows such a user ID. The SCC can allow arbitrary IDs, an ID that falls into a range, or the exact user ID specific to the request.

This configuration is valid for SELinux, fsGroup, and Supplemental Groups.

15.4. Creating security context constraints

You can create security context constraints (SCCs) by using the OpenShift CLI (oc).

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in to the cluster as a user with the cluster-admin role.

Procedure

  1. Define the SCC in a YAML file named scc_admin.yaml:

    SecurityContextConstraints object definition

    kind: SecurityContextConstraints
    apiVersion: security.openshift.io/v1
    metadata:
      name: scc-admin
    allowPrivilegedContainer: true
    runAsUser:
      type: RunAsAny
    seLinuxContext:
      type: RunAsAny
    fsGroup:
      type: RunAsAny
    supplementalGroups:
      type: RunAsAny
    users:
    - my-admin-user
    groups:
    - my-admin-group

    Optionally, you can drop specific capabilities for an SCC by setting the requiredDropCapabilities field with the desired values. Any specified capabilities are dropped from the container. To drop all capabilities, specify ALL. For example, to create an SCC that drops the KILL, MKNOD, and SYS_CHROOT capabilities, add the following to the SCC object:

    requiredDropCapabilities:
    - KILL
    - MKNOD
    - SYS_CHROOT
    Note

    You cannot list a capability in both allowedCapabilities and requiredDropCapabilities.

    CRI-O supports the same list of capability values that are found in the Docker documentation.

  2. Create the SCC by passing in the file:

    $ oc create -f scc_admin.yaml

    Example output

    securitycontextconstraints "scc-admin" created

Verification

  • Verify that the SCC was created:

    $ oc get scc scc-admin

    Example output

    NAME        PRIV      CAPS      SELINUX    RUNASUSER   FSGROUP    SUPGROUP   PRIORITY   READONLYROOTFS   VOLUMES
    scc-admin   true      []        RunAsAny   RunAsAny    RunAsAny   RunAsAny   <none>     false            [awsElasticBlockStore azureDisk azureFile cephFS cinder configMap downwardAPI emptyDir fc flexVolume flocker gcePersistentDisk gitRepo glusterfs iscsi nfs persistentVolumeClaim photonPersistentDisk quobyte rbd secret vsphere]

15.5. Role-based access to security context constraints

You can specify SCCs as resources that are handled by RBAC. This allows you to scope access to your SCCs to a certain project or to the entire cluster. Assigning users, groups, or service accounts directly to an SCC retains cluster-wide scope.

Note

You cannot assign an SCC to pods created in one of the default namespaces: default, kube-system, kube-public, openshift-node, openshift-infra, openshift. These namespaces should not be used for running pods or services.

To include access to SCCs for your role, specify the scc resource when creating a role.

$ oc create role <role-name> --verb=use --resource=scc --resource-name=<scc-name> -n <namespace>

This results in the following role definition:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
...
  name: role-name 1
  namespace: namespace 2
...
rules:
- apiGroups:
  - security.openshift.io 3
  resourceNames:
  - scc-name 4
  resources:
  - securitycontextconstraints 5
  verbs: 6
  - use
1
The role’s name.
2
Namespace of the defined role. Defaults to default if not specified.
3
The API group that includes the SecurityContextConstraints resource. Automatically defined when scc is specified as a resource.
4
An example name of an SCC to which you want to grant access.
5
Name of the resource group that allows users to specify SCC names in the resourceNames field.
6
A list of verbs to apply to the role.

A local or cluster role with such a rule allows the subjects that are bound to it with a role binding or a cluster role binding to use the user-defined SCC called scc-name.
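
For example, you could bind a service account to the role shown above with a command similar to the following; the names in angle brackets are placeholders:

$ oc create rolebinding <rolebinding_name> --role=role-name --serviceaccount=<namespace>:<service_account_name> -n <namespace>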

Note

Because RBAC is designed to prevent escalation, even project administrators are unable to grant access to an SCC. By default, they are not allowed to use the verb use on SCC resources, including the restricted SCC.

15.6. Reference of security context constraints commands

You can manage security context constraints (SCCs) in your instance as normal API objects using the OpenShift CLI (oc).

Note

You must have cluster-admin privileges to manage SCCs.

15.6.1. Listing security context constraints

To get a current list of SCCs:

$ oc get scc

Example output

NAME               PRIV    CAPS   SELINUX     RUNASUSER          FSGROUP     SUPGROUP    PRIORITY   READONLYROOTFS   VOLUMES
anyuid             false   []     MustRunAs   RunAsAny           RunAsAny    RunAsAny    10         false            [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
hostaccess         false   []     MustRunAs   MustRunAsRange     MustRunAs   RunAsAny    <none>     false            [configMap downwardAPI emptyDir hostPath persistentVolumeClaim projected secret]
hostmount-anyuid   false   []     MustRunAs   RunAsAny           RunAsAny    RunAsAny    <none>     false            [configMap downwardAPI emptyDir hostPath nfs persistentVolumeClaim projected secret]
hostnetwork        false   []     MustRunAs   MustRunAsRange     MustRunAs   MustRunAs   <none>     false            [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
node-exporter      false   []     RunAsAny    RunAsAny           RunAsAny    RunAsAny    <none>     false            [*]
nonroot            false   []     MustRunAs   MustRunAsNonRoot   RunAsAny    RunAsAny    <none>     false            [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
privileged         true    [*]    RunAsAny    RunAsAny           RunAsAny    RunAsAny    <none>     false            [*]
restricted         false   []     MustRunAs   MustRunAsRange     MustRunAs   RunAsAny    <none>     false            [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]

15.6.2. Examining security context constraints

You can view information about a particular SCC, including which users, service accounts, and groups the SCC is applied to.

For example, to examine the restricted SCC:

$ oc describe scc restricted

Example output

Name:					restricted
Priority:				<none>
Access:
  Users:				<none> 1
  Groups:				system:authenticated 2
Settings:
  Allow Privileged:			false
  Default Add Capabilities:		<none>
  Required Drop Capabilities:		KILL,MKNOD,SYS_CHROOT,SETUID,SETGID
  Allowed Capabilities:			<none>
  Allowed Seccomp Profiles:		<none>
  Allowed Volume Types:			configMap,downwardAPI,emptyDir,persistentVolumeClaim,projected,secret
  Allow Host Network:			false
  Allow Host Ports:			false
  Allow Host PID:			false
  Allow Host IPC:			false
  Read Only Root Filesystem:		false
  Run As User Strategy: MustRunAsRange
    UID:				<none>
    UID Range Min:			<none>
    UID Range Max:			<none>
  SELinux Context Strategy: MustRunAs
    User:				<none>
    Role:				<none>
    Type:				<none>
    Level:				<none>
  FSGroup Strategy: MustRunAs
    Ranges:				<none>
  Supplemental Groups Strategy: RunAsAny
    Ranges:				<none>

1
Lists which users and service accounts the SCC is applied to.
2
Lists which groups the SCC is applied to.
Note

To preserve customized SCCs during upgrades, do not edit settings on the default SCCs.

15.6.3. Deleting security context constraints

To delete an SCC:

$ oc delete scc <scc_name>
Note

If you delete a default SCC, it will regenerate when you restart the cluster.

15.6.4. Updating security context constraints

To update an existing SCC:

$ oc edit scc <scc_name>
Note

To preserve customized SCCs during upgrades, do not edit settings on the default SCCs.

Chapter 16. Impersonating the system:admin user

16.1. API impersonation

You can configure a request to the OpenShift Container Platform API to act as though it originated from another user. For more information, see User impersonation in the Kubernetes documentation.

16.2. Impersonating the system:admin user

You can grant a user permission to impersonate system:admin, which grants them cluster administrator permissions.

Procedure

  • To grant a user permission to impersonate system:admin, run the following command:

    $ oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --user=<username>
    Tip

    You can alternatively apply the following YAML to grant permission to impersonate system:admin:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: <any_valid_name>
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: sudoer
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: User
      name: <username>
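
After the cluster role binding exists, the granted user can impersonate system:admin by passing the --as option (and, if needed, --as-group) to any oc command, for example:

$ oc get nodes --as=system:admin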

16.3. Impersonating the system:admin group

When a system:admin user is granted cluster administration permissions through a group, you must include the --as=<user> --as-group=<group1> --as-group=<group2> parameters in the command to impersonate the associated groups.

Procedure

  • To grant a user permission to impersonate a system:admin by impersonating the associated cluster administration groups, run the following command:

    $ oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --as=<user> \
    --as-group=<group1> --as-group=<group2>

Chapter 17. Syncing LDAP groups

As an administrator, you can use groups to manage users, change their permissions, and enhance collaboration. Your organization may have already created user groups and stored them in an LDAP server. OpenShift Container Platform can sync those LDAP records with internal OpenShift Container Platform records, enabling you to manage your groups in one place. OpenShift Container Platform currently supports group sync with LDAP servers using three common schemas for defining group membership: RFC 2307, Active Directory, and augmented Active Directory.

For more information on configuring LDAP, see Configuring an LDAP identity provider.

Note

You must have cluster-admin privileges to sync groups.

17.1. About configuring LDAP sync

Before you can run LDAP sync, you need a sync configuration file. This file contains the following LDAP client configuration details:

  • Configuration for connecting to your LDAP server.
  • Sync configuration options that are dependent on the schema used in your LDAP server.
  • An administrator-defined list of name mappings that maps OpenShift Container Platform group names to groups in your LDAP server.

The format of the configuration file depends upon the schema you are using: RFC 2307, Active Directory, or augmented Active Directory.

LDAP client configuration
The LDAP client configuration section of the configuration defines the connections to your LDAP server.

LDAP client configuration

url: ldap://10.0.0.0:389 1
bindDN: cn=admin,dc=example,dc=com 2
bindPassword: <password> 3
insecure: false 4
ca: my-ldap-ca-bundle.crt 5

1
The connection protocol, IP address of the LDAP server hosting your database, and the port to connect to, formatted as scheme://host:port.
2
Optional distinguished name (DN) to use as the Bind DN. OpenShift Container Platform uses this if elevated privilege is required to retrieve entries for the sync operation.
3
Optional password to use to bind. OpenShift Container Platform uses this if elevated privilege is necessary to retrieve entries for the sync operation. This value may also be provided in an environment variable, external file, or encrypted file.
4
When false, secure LDAP (ldaps://) URLs connect using TLS, and insecure LDAP (ldap://) URLs are upgraded to TLS. When true, no TLS connection is made to the server and you cannot use ldaps:// URL schemes.
5
The certificate bundle to use for validating server certificates for the configured URL. If empty, OpenShift Container Platform uses system-trusted roots. This only applies if insecure is set to false.
LDAP query definition
Sync configurations consist of LDAP query definitions for the entries that are required for synchronization. The specific definition of an LDAP query depends on the schema used to store membership information in the LDAP server.

LDAP query definition

baseDN: ou=users,dc=example,dc=com 1
scope: sub 2
derefAliases: never 3
timeout: 0 4
filter: (objectClass=person) 5
pageSize: 0 6

1
The distinguished name (DN) of the branch of the directory where all searches will start from. It is required that you specify the top of your directory tree, but you can also specify a subtree in the directory.
2
The scope of the search. Valid values are base, one, or sub. If this is left undefined, then a scope of sub is assumed. Descriptions of the scope options can be found in the table below.
3
The behavior of the search with respect to aliases in the LDAP tree. Valid values are never, search, base, or always. If this is left undefined, then the default is to always dereference aliases. Descriptions of the dereferencing behaviors can be found in the table below.
4
The time limit allowed for the search by the client, in seconds. A value of 0 imposes no client-side limit.
5
A valid LDAP search filter. If this is left undefined, then the default is (objectClass=*).
6
The optional maximum size of response pages from the server, measured in LDAP entries. If set to 0, no size restrictions will be made on pages of responses. Setting paging sizes is necessary when queries return more entries than the client or server allow by default.
Table 17.1. LDAP search scope options
LDAP search scope   Description
base                Only consider the object specified by the base DN given for the query.
one                 Consider all of the objects on the same level in the tree as the base DN for the query.
sub                 Consider the entire subtree rooted at the base DN given for the query.

Table 17.2. LDAP dereferencing behaviors
Dereferencing behavior   Description
never                    Never dereference any aliases found in the LDAP tree.
search                   Only dereference aliases found while searching.
base                     Only dereference aliases while finding the base object.
always                   Always dereference all aliases found in the LDAP tree.

User-defined name mapping
A user-defined name mapping explicitly maps the names of OpenShift Container Platform groups to unique identifiers that find groups on your LDAP server. The mapping uses normal YAML syntax. A user-defined mapping can contain an entry for every group in your LDAP server or only a subset of those groups. If there are groups on the LDAP server that do not have a user-defined name mapping, the default behavior during sync is to use the attribute specified as the OpenShift Container Platform group’s name.

User-defined name mapping

groupUIDNameMapping:
  "cn=group1,ou=groups,dc=example,dc=com": firstgroup
  "cn=group2,ou=groups,dc=example,dc=com": secondgroup
  "cn=group3,ou=groups,dc=example,dc=com": thirdgroup

17.1.1. About the RFC 2307 configuration file

The RFC 2307 schema requires you to provide an LDAP query definition for both user and group entries, as well as the attributes with which to represent them in the internal OpenShift Container Platform records.

For clarity, the group you create in OpenShift Container Platform should use attributes other than the distinguished name whenever possible for user- or administrator-facing fields. For example, identify the users of an OpenShift Container Platform group by their e-mail, and use the name of the group as the common name. The following configuration file creates these relationships:

Note

If using user-defined name mappings, your configuration file will differ.

LDAP sync configuration that uses RFC 2307 schema: rfc2307_config.yaml

kind: LDAPSyncConfig
apiVersion: v1
url: ldap://LDAP_SERVICE_IP:389 1
insecure: false 2
rfc2307:
    groupsQuery:
        baseDN: "ou=groups,dc=example,dc=com"
        scope: sub
        derefAliases: never
        pageSize: 0
    groupUIDAttribute: dn 3
    groupNameAttributes: [ cn ] 4
    groupMembershipAttributes: [ member ] 5
    usersQuery:
        baseDN: "ou=users,dc=example,dc=com"
        scope: sub
        derefAliases: never
        pageSize: 0
    userUIDAttribute: dn 6
    userNameAttributes: [ mail ] 7
    tolerateMemberNotFoundErrors: false
    tolerateMemberOutOfScopeErrors: false

1
The IP address and host of the LDAP server where this group’s record is stored.
2
When false, secure LDAP (ldaps://) URLs connect using TLS, and insecure LDAP (ldap://) URLs are upgraded to TLS. When true, no TLS connection is made to the server and you cannot use ldaps:// URL schemes.
3
The attribute that uniquely identifies a group on the LDAP server. You cannot specify groupsQuery filters when using DN for groupUIDAttribute. For fine-grained filtering, use the whitelist / blacklist method.
4
The attribute to use as the name of the group.
5
The attribute on the group that stores the membership information.
6
The attribute that uniquely identifies a user on the LDAP server. You cannot specify usersQuery filters when using DN for userUIDAttribute. For fine-grained filtering, use the whitelist / blacklist method.
7
The attribute to use as the name of the user in the OpenShift Container Platform group record.

17.1.2. About the Active Directory configuration file

The Active Directory schema requires you to provide an LDAP query definition for user entries, as well as the attributes with which to represent them in the internal OpenShift Container Platform group records.

For clarity, the group you create in OpenShift Container Platform should use attributes other than the distinguished name whenever possible for user- or administrator-facing fields. For example, identify the users of an OpenShift Container Platform group by their e-mail, but define the name of the group by the name of the group on the LDAP server. The following configuration file creates these relationships:

LDAP sync configuration that uses Active Directory schema: active_directory_config.yaml

kind: LDAPSyncConfig
apiVersion: v1
url: ldap://LDAP_SERVICE_IP:389
activeDirectory:
    usersQuery:
        baseDN: "ou=users,dc=example,dc=com"
        scope: sub
        derefAliases: never
        filter: (objectclass=person)
        pageSize: 0
    userNameAttributes: [ mail ] 1
    groupMembershipAttributes: [ memberOf ] 2

1
The attribute to use as the name of the user in the OpenShift Container Platform group record.
2
The attribute on the user that stores the membership information.

17.1.3. About the augmented Active Directory configuration file

The augmented Active Directory schema requires you to provide an LDAP query definition for both user entries and group entries, as well as the attributes with which to represent them in the internal OpenShift Container Platform group records.

For clarity, the group you create in OpenShift Container Platform should use attributes other than the distinguished name whenever possible for user- or administrator-facing fields. For example, identify the users of an OpenShift Container Platform group by their e-mail, and use the name of the group as the common name. The following configuration file creates these relationships.

LDAP sync configuration that uses augmented Active Directory schema: augmented_active_directory_config.yaml

kind: LDAPSyncConfig
apiVersion: v1
url: ldap://LDAP_SERVICE_IP:389
augmentedActiveDirectory:
    groupsQuery:
        baseDN: "ou=groups,dc=example,dc=com"
        scope: sub
        derefAliases: never
        pageSize: 0
    groupUIDAttribute: dn 1
    groupNameAttributes: [ cn ] 2
    usersQuery:
        baseDN: "ou=users,dc=example,dc=com"
        scope: sub
        derefAliases: never
        filter: (objectclass=person)
        pageSize: 0
    userNameAttributes: [ mail ] 3
    groupMembershipAttributes: [ memberOf ] 4

1
The attribute that uniquely identifies a group on the LDAP server. You cannot specify groupsQuery filters when using DN for groupUIDAttribute. For fine-grained filtering, use the whitelist / blacklist method.
2
The attribute to use as the name of the group.
3
The attribute to use as the name of the user in the OpenShift Container Platform group record.
4
The attribute on the user that stores the membership information.

17.2. Running LDAP sync

Once you have created a sync configuration file, you can begin to sync. OpenShift Container Platform allows administrators to perform a number of different sync types with the same server.

17.2.1. Syncing the LDAP server with OpenShift Container Platform

You can sync all groups from the LDAP server with OpenShift Container Platform.

Prerequisites

  • Create a sync configuration file.

Procedure

  • To sync all groups from the LDAP server with OpenShift Container Platform:

    $ oc adm groups sync --sync-config=config.yaml --confirm
    Note

    By default, all group synchronization operations are dry-run, so you must set the --confirm flag on the oc adm groups sync command to make changes to OpenShift Container Platform group records.
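
    For example, you can preview the changes first by running the command without the --confirm flag; the dry run should report the group records that the sync would create or modify without changing any records (a minimal sketch that uses the same configuration file):

    $ oc adm groups sync --sync-config=config.yaml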

17.2.2. Syncing OpenShift Container Platform groups with the LDAP server

You can sync all groups already in OpenShift Container Platform that correspond to groups in the LDAP server specified in the configuration file.

Prerequisites

  • Create a sync configuration file.

Procedure

  • To sync OpenShift Container Platform groups with the LDAP server:

    $ oc adm groups sync --type=openshift --sync-config=config.yaml --confirm
    Note

    By default, all group synchronization operations are dry-run, so you must set the --confirm flag on the oc adm groups sync command to make changes to OpenShift Container Platform group records.

17.2.3. Syncing subgroups from the LDAP server with OpenShift Container Platform

You can sync a subset of LDAP groups with OpenShift Container Platform using whitelist files, blacklist files, or both.

Note

You can use any combination of blacklist files, whitelist files, or whitelist literals. Whitelist and blacklist files must contain one unique group identifier per line, and you can include whitelist literals directly in the command itself. These guidelines apply to groups found on LDAP servers as well as groups already present in OpenShift Container Platform.
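
For example, a whitelist file that limits the sync to two groups contains one unique group identifier per line. The group DNs shown here are illustrative and assume that distinguished names are used as the unique identifiers, as in the RFC 2307 examples in this chapter:

  cn=group1,ou=groups,dc=example,dc=com
  cn=group2,ou=groups,dc=example,dc=com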

Prerequisites

  • Create a sync configuration file.

Procedure

  • To sync a subset of LDAP groups with OpenShift Container Platform, use any of the following commands:

    $ oc adm groups sync --whitelist=<whitelist_file> \
                       --sync-config=config.yaml      \
                       --confirm
    $ oc adm groups sync --blacklist=<blacklist_file> \
                       --sync-config=config.yaml      \
                       --confirm
    $ oc adm groups sync <group_unique_identifier>    \
                       --sync-config=config.yaml      \
                       --confirm
    $ oc adm groups sync <group_unique_identifier>  \
                       --whitelist=<whitelist_file> \
                       --blacklist=<blacklist_file> \
                       --sync-config=config.yaml    \
                       --confirm
    $ oc adm groups sync --type=openshift           \
                       --whitelist=<whitelist_file> \
                       --sync-config=config.yaml    \
                       --confirm
    Note

    By default, all group synchronization operations are dry-run, so you must set the --confirm flag on the oc adm groups sync command to make changes to OpenShift Container Platform group records.

17.3. Running a group pruning job

An administrator can also choose to remove groups from OpenShift Container Platform records if the records on the LDAP server that created them are no longer present. The prune job accepts the same sync configuration file and the same whitelist or blacklist files that are used for the sync job.

For example:

$ oc adm prune groups --sync-config=/path/to/ldap-sync-config.yaml --confirm
$ oc adm prune groups --whitelist=/path/to/whitelist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm
$ oc adm prune groups --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm

17.4. Automatically syncing LDAP groups

You can automatically sync LDAP groups on a periodic basis by configuring a cron job.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.
  • You have configured an LDAP identity provider (IDP).

    This procedure assumes that you created an LDAP secret named ldap-secret and a config map named ca-config-map.

Procedure

  1. Create a project where the cron job will run:

    $ oc new-project ldap-sync 1
    1
    This procedure uses a project called ldap-sync.
  2. Locate the secret and config map that you created when configuring the LDAP identity provider and copy them to this new project.

    The secret and config map exist in the openshift-config project and must be copied to the new ldap-sync project.
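
    One possible way to copy them is to extract their contents and re-create them in the new project (a minimal sketch; it assumes that the secret key is bindPassword and the config map key is ca.crt, which matches the sync configuration and volume mounts used later in this procedure):

    $ oc extract secret/ldap-secret -n openshift-config --to=.
    $ oc create secret generic ldap-secret -n ldap-sync --from-file=bindPassword=bindPassword

    $ oc extract configmap/ca-config-map -n openshift-config --to=.
    $ oc create configmap ca-config-map -n ldap-sync --from-file=ca.crt=ca.crt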

  3. Define a service account:

    Example ldap-sync-service-account.yaml

    kind: ServiceAccount
    apiVersion: v1
    metadata:
      name: ldap-group-syncer
      namespace: ldap-sync

  4. Create the service account:

    $ oc create -f ldap-sync-service-account.yaml
  5. Define a cluster role:

    Example ldap-sync-cluster-role.yaml

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: ldap-group-syncer
    rules:
      - apiGroups:
          - ''
          - user.openshift.io
        resources:
          - groups
        verbs:
          - get
          - list
          - create
          - update

  6. Create the cluster role:

    $ oc create -f ldap-sync-cluster-role.yaml
  7. Define a cluster role binding to bind the cluster role to the service account:

    Example ldap-sync-cluster-role-binding.yaml

    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: ldap-group-syncer
    subjects:
      - kind: ServiceAccount
        name: ldap-group-syncer              1
        namespace: ldap-sync
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: ldap-group-syncer                2

    1
    Reference to the service account created earlier in this procedure.
    2
    Reference to the cluster role created earlier in this procedure.
  8. Create the cluster role binding:

    $ oc create -f ldap-sync-cluster-role-binding.yaml
  9. Define a config map that specifies the sync configuration file:

    Example ldap-sync-config-map.yaml

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: ldap-group-syncer
      namespace: ldap-sync
    data:
      sync.yaml: |                                 1
        kind: LDAPSyncConfig
        apiVersion: v1
        url: ldaps://10.0.0.0:389                  2
        insecure: false
        bindDN: cn=admin,dc=example,dc=com         3
        bindPassword:
          file: "/etc/secrets/bindPassword"
        ca: /etc/ldap-ca/ca.crt
        rfc2307:                                   4
          groupsQuery:
            baseDN: "ou=groups,dc=example,dc=com"  5
            scope: sub
            filter: "(objectClass=groupOfMembers)"
            derefAliases: never
            pageSize: 0
          groupUIDAttribute: dn
          groupNameAttributes: [ cn ]
          groupMembershipAttributes: [ member ]
          usersQuery:
            baseDN: "ou=users,dc=example,dc=com"   6
            scope: sub
            derefAliases: never
            pageSize: 0
          userUIDAttribute: dn
          userNameAttributes: [ uid ]
          tolerateMemberNotFoundErrors: false
          tolerateMemberOutOfScopeErrors: false

    1
    Define the sync configuration file.
    2
    Specify the URL.
    3
    Specify the bindDN.
    4
    This example uses the RFC2307 schema; adjust values as necessary. You can also use a different schema.
    5
    Specify the baseDN for groupsQuery.
    6
    Specify the baseDN for usersQuery.
  10. Create the config map:

    $ oc create -f ldap-sync-config-map.yaml
  11. Define a cron job:

    Example ldap-sync-cron-job.yaml

    kind: CronJob
    apiVersion: batch/v1
    metadata:
      name: ldap-group-syncer
      namespace: ldap-sync
    spec:                                                                                1
      schedule: "*/30 * * * *"                                                           2
      concurrencyPolicy: Forbid
      jobTemplate:
        spec:
          backoffLimit: 0
          ttlSecondsAfterFinished: 1800                                                  3
          template:
            spec:
              containers:
                - name: ldap-group-sync
                  image: "registry.redhat.io/openshift4/ose-cli:latest"
                  command:
                    - "/bin/bash"
                    - "-c"
                    - "oc adm groups sync --sync-config=/etc/config/sync.yaml --confirm" 4
                  volumeMounts:
                    - mountPath: "/etc/config"
                      name: "ldap-sync-volume"
                    - mountPath: "/etc/secrets"
                      name: "ldap-bind-password"
                    - mountPath: "/etc/ldap-ca"
                      name: "ldap-ca"
              volumes:
                - name: "ldap-sync-volume"
                  configMap:
                    name: "ldap-group-syncer"
                - name: "ldap-bind-password"
                  secret:
                    secretName: "ldap-secret"                                            5
                - name: "ldap-ca"
                  configMap:
                    name: "ca-config-map"                                                6
              restartPolicy: "Never"
              terminationGracePeriodSeconds: 30
              activeDeadlineSeconds: 500
              dnsPolicy: "ClusterFirst"
              serviceAccountName: "ldap-group-syncer"

    1
    Configure the settings for the cron job. See "Creating cron jobs" for more information on cron job settings.
    2
    The schedule for the job specified in cron format. This example cron job runs every 30 minutes. Adjust the frequency as necessary, making sure to take into account how long the sync takes to run.
    3
    How long, in seconds, to keep finished jobs. This should match the period of the job schedule in order to clean old failed jobs and prevent unnecessary alerts. For more information, see TTL-after-finished Controller in the Kubernetes documentation.
    4
    The LDAP sync command for the cron job to run. Passes in the sync configuration file that was defined in the config map.
    5
    This secret was created when the LDAP IDP was configured.
    6
    This config map was created when the LDAP IDP was configured.
  12. Create the cron job:

    $ oc create -f ldap-sync-cron-job.yaml
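
You can optionally verify the setup. For example, confirm that the cron job exists and check the jobs that it creates (a minimal sketch; the resource and project names follow this procedure):

$ oc get cronjob ldap-group-syncer -n ldap-sync
$ oc get jobs -n ldap-sync

After the first scheduled run completes, you can list the OpenShift Container Platform groups to confirm that they were synced:

$ oc get groups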

17.5. LDAP group sync examples

This section contains examples for the RFC 2307, Active Directory, and augmented Active Directory schemas.

Note

These examples assume that all users are direct members of their respective groups. Specifically, no groups have other groups as members. See the Nested Membership Sync Example for information on how to sync nested groups.

17.5.1. Syncing groups using the RFC 2307 schema

For the RFC 2307 schema, the following examples synchronize a group named admins that has two members: Jane and Jim. The examples explain:

  • How the group and users are added to the LDAP server.
  • What the resulting group record in OpenShift Container Platform will be after synchronization.

In the RFC 2307 schema, both users (Jane and Jim) and groups exist on the LDAP server as first-class entries, and group membership is stored in attributes on the group. The following snippet of ldif defines the users and group for this schema:

LDAP entries that use RFC 2307 schema: rfc2307.ldif

  dn: ou=users,dc=example,dc=com
  objectClass: organizationalUnit
  ou: users
  dn: cn=Jane,ou=users,dc=example,dc=com
  objectClass: person
  objectClass: organizationalPerson
  objectClass: inetOrgPerson
  cn: Jane
  sn: Smith
  displayName: Jane Smith
  mail: jane.smith@example.com
  dn: cn=Jim,ou=users,dc=example,dc=com
  objectClass: person
  objectClass: organizationalPerson
  objectClass: inetOrgPerson
  cn: Jim
  sn: Adams
  displayName: Jim Adams
  mail: jim.adams@example.com
  dn: ou=groups,dc=example,dc=com
  objectClass: organizationalUnit
  ou: groups
  dn: cn=admins,ou=groups,dc=example,dc=com 1
  objectClass: groupOfNames
  cn: admins
  owner: cn=admin,dc=example,dc=com
  description: System Administrators
  member: cn=Jane,ou=users,dc=example,dc=com 2
  member: cn=Jim,ou=users,dc=example,dc=com

1
The group is a first-class entry in the LDAP server.
2
Members of a group are listed with an identifying reference as attributes on the group.

Prerequisites

  • Create the configuration file.

Procedure

  • Run the sync with the rfc2307_config.yaml file:

    $ oc adm groups sync --sync-config=rfc2307_config.yaml --confirm

    OpenShift Container Platform creates the following group record as a result of the above sync operation:

    OpenShift Container Platform group created by using the rfc2307_config.yaml file

    apiVersion: user.openshift.io/v1
    kind: Group
    metadata:
      annotations:
        openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1
        openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2
        openshift.io/ldap.url: LDAP_SERVER_IP:389 3
      creationTimestamp:
      name: admins 4
    users: 5
    - jane.smith@example.com
    - jim.adams@example.com

    1
    The last time this OpenShift Container Platform group was synchronized with the LDAP server, in ISO 8601 format.
    2
    The unique identifier for the group on the LDAP server.
    3
    The IP address and host of the LDAP server where this group’s record is stored.
    4
    The name of the group as specified by the sync file.
    5
    The users that are members of the group, named as specified by the sync file.

17.5.2. Syncing groups using the RFC2307 schema with user-defined name mappings

When syncing groups with user-defined name mappings, the configuration file changes to contain these mappings as shown below.

LDAP sync configuration that uses RFC 2307 schema with user-defined name mappings: rfc2307_config_user_defined.yaml

kind: LDAPSyncConfig
apiVersion: v1
groupUIDNameMapping:
  "cn=admins,ou=groups,dc=example,dc=com": Administrators 1
rfc2307:
    groupsQuery:
        baseDN: "ou=groups,dc=example,dc=com"
        scope: sub
        derefAliases: never
        pageSize: 0
    groupUIDAttribute: dn 2
    groupNameAttributes: [ cn ] 3
    groupMembershipAttributes: [ member ]
    usersQuery:
        baseDN: "ou=users,dc=example,dc=com"
        scope: sub
        derefAliases: never
        pageSize: 0
    userUIDAttribute: dn 4
    userNameAttributes: [ mail ]
    tolerateMemberNotFoundErrors: false
    tolerateMemberOutOfScopeErrors: false

1
The user-defined name mapping.
2
The unique identifier attribute that is used for the keys in the user-defined name mapping. You cannot specify groupsQuery filters when using DN for groupUIDAttribute. For fine-grained filtering, use the whitelist / blacklist method.
3
The attribute to name OpenShift Container Platform groups with if their unique identifier is not in the user-defined name mapping.
4
The attribute that uniquely identifies a user on the LDAP server. You cannot specify usersQuery filters when using DN for userUIDAttribute. For fine-grained filtering, use the whitelist / blacklist method.

Prerequisites

  • Create the configuration file.

Procedure

  • Run the sync with the rfc2307_config_user_defined.yaml file:

    $ oc adm groups sync --sync-config=rfc2307_config_user_defined.yaml --confirm

    OpenShift Container Platform creates the following group record as a result of the above sync operation:

    OpenShift Container Platform group created by using the rfc2307_config_user_defined.yaml file

    apiVersion: user.openshift.io/v1
    kind: Group
    metadata:
      annotations:
        openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400
        openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com
        openshift.io/ldap.url: LDAP_SERVER_IP:389
      creationTimestamp:
      name: Administrators 1
    users:
    - jane.smith@example.com
    - jim.adams@example.com

    1
    The name of the group as specified by the user-defined name mapping.

17.5.3. Syncing groups using RFC 2307 with user-defined error tolerances

By default, if the groups being synced contain members whose entries are outside of the scope defined in the member query, the group sync fails with an error:

Error determining LDAP group membership for "<group>": membership lookup for user "<user>" in group "<group>" failed because of "search for entry with dn="<user-dn>" would search outside of the base dn specified (dn="<base-dn>")".

This often indicates a misconfigured baseDN in the usersQuery field. However, in cases where the baseDN intentionally does not contain some of the members of the group, setting tolerateMemberOutOfScopeErrors: true allows the group sync to continue. Out of scope members will be ignored.

Similarly, when the group sync process fails to locate a member for a group, it fails outright with errors:

Error determining LDAP group membership for "<group>": membership lookup for user "<user>" in group "<group>" failed because of "search for entry with base dn="<user-dn>" refers to a non-existent entry".
Error determining LDAP group membership for "<group>": membership lookup for user "<user>" in group "<group>" failed because of "search for entry with base dn="<user-dn>" and filter "<filter>" did not return any results".

This often indicates a misconfigured usersQuery field. However, in cases where the group contains member entries that are known to be missing, setting tolerateMemberNotFoundErrors: true allows the group sync to continue. Problematic members will be ignored.

Warning

Enabling error tolerances for the LDAP group sync causes the sync process to ignore problematic member entries. If the LDAP group sync is not configured correctly, this could result in synced OpenShift Container Platform groups missing members.

LDAP entries that use RFC 2307 schema with problematic group membership: rfc2307_problematic_users.ldif

  dn: ou=users,dc=example,dc=com
  objectClass: organizationalUnit
  ou: users
  dn: cn=Jane,ou=users,dc=example,dc=com
  objectClass: person
  objectClass: organizationalPerson
  objectClass: inetOrgPerson
  cn: Jane
  sn: Smith
  displayName: Jane Smith
  mail: jane.smith@example.com
  dn: cn=Jim,ou=users,dc=example,dc=com
  objectClass: person
  objectClass: organizationalPerson
  objectClass: inetOrgPerson
  cn: Jim
  sn: Adams
  displayName: Jim Adams
  mail: jim.adams@example.com
  dn: ou=groups,dc=example,dc=com
  objectClass: organizationalUnit
  ou: groups
  dn: cn=admins,ou=groups,dc=example,dc=com
  objectClass: groupOfNames
  cn: admins
  owner: cn=admin,dc=example,dc=com
  description: System Administrators
  member: cn=Jane,ou=users,dc=example,dc=com
  member: cn=Jim,ou=users,dc=example,dc=com
  member: cn=INVALID,ou=users,dc=example,dc=com 1
  member: cn=Jim,ou=OUTOFSCOPE,dc=example,dc=com 2

1
A member that does not exist on the LDAP server.
2
A member that may exist, but is not under the baseDN in the user query for the sync job.

To tolerate the errors in the above example, the following additions to your sync configuration file must be made:

LDAP sync configuration that uses RFC 2307 schema tolerating errors: rfc2307_config_tolerating.yaml

kind: LDAPSyncConfig
apiVersion: v1
url: ldap://LDAP_SERVICE_IP:389
rfc2307:
    groupsQuery:
        baseDN: "ou=groups,dc=example,dc=com"
        scope: sub
        derefAliases: never
    groupUIDAttribute: dn
    groupNameAttributes: [ cn ]
    groupMembershipAttributes: [ member ]
    usersQuery:
        baseDN: "ou=users,dc=example,dc=com"
        scope: sub
        derefAliases: never
    userUIDAttribute: dn 1
    userNameAttributes: [ mail ]
    tolerateMemberNotFoundErrors: true 2
    tolerateMemberOutOfScopeErrors: true 3

1
The attribute that uniquely identifies a user on the LDAP server. You cannot specify usersQuery filters when using DN for userUIDAttribute. For fine-grained filtering, use the whitelist / blacklist method.
2
When true, the sync job tolerates groups for which some members were not found, and members whose LDAP entries are not found are ignored. The default behavior for the sync job is to fail if a member of a group is not found.
3
When true, the sync job tolerates groups for which some members are outside the user scope given in the usersQuery base DN, and members outside the member query scope are ignored. The default behavior for the sync job is to fail if a member of a group is out of scope.

Prerequisites

  • Create the configuration file.

Procedure

  • Run the sync with the rfc2307_config_tolerating.yaml file:

    $ oc adm groups sync --sync-config=rfc2307_config_tolerating.yaml --confirm

    OpenShift Container Platform creates the following group record as a result of the above sync operation:

    OpenShift Container Platform group created by using the rfc2307_config.yaml file

    apiVersion: user.openshift.io/v1
    kind: Group
    metadata:
      annotations:
        openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400
        openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com
        openshift.io/ldap.url: LDAP_SERVER_IP:389
      creationTimestamp:
      name: admins
    users: 1
    - jane.smith@example.com
    - jim.adams@example.com

    1
    The users that are members of the group, as specified by the sync file. Members for which lookup encountered tolerated errors are absent.

17.5.4. Syncing groups using the Active Directory schema

In the Active Directory schema, both users (Jane and Jim) exist in the LDAP server as first-class entries, and group membership is stored in attributes on the user. The following snippet of ldif defines the users and group for this schema:

LDAP entries that use Active Directory schema: active_directory.ldif

dn: ou=users,dc=example,dc=com
objectClass: organizationalUnit
ou: users

dn: cn=Jane,ou=users,dc=example,dc=com
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: testPerson
cn: Jane
sn: Smith
displayName: Jane Smith
mail: jane.smith@example.com
memberOf: admins 1

dn: cn=Jim,ou=users,dc=example,dc=com
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: testPerson
cn: Jim
sn: Adams
displayName: Jim Adams
mail: jim.adams@example.com
memberOf: admins

1
The user’s group memberships are listed as attributes on the user, and the group does not exist as an entry on the server. The memberOf attribute does not have to be a literal attribute on the user; in some LDAP servers, it is created during search and returned to the client, but not committed to the database.

Prerequisites

  • Create the configuration file.

Procedure

  • Run the sync with the active_directory_config.yaml file:

    $ oc adm groups sync --sync-config=active_directory_config.yaml --confirm

    OpenShift Container Platform creates the following group record as a result of the above sync operation:

    OpenShift Container Platform group created by using the active_directory_config.yaml file

    apiVersion: user.openshift.io/v1
    kind: Group
    metadata:
      annotations:
        openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1
        openshift.io/ldap.uid: admins 2
        openshift.io/ldap.url: LDAP_SERVER_IP:389 3
      creationTimestamp:
      name: admins 4
    users: 5
    - jane.smith@example.com
    - jim.adams@example.com

    1
    The last time this OpenShift Container Platform group was synchronized with the LDAP server, in ISO 8601 format.
    2
    The unique identifier for the group on the LDAP server.
    3
    The IP address and host of the LDAP server where this group’s record is stored.
    4
    The name of the group as listed in the LDAP server.
    5
    The users that are members of the group, named as specified by the sync file.

17.5.5. Syncing groups using the augmented Active Directory schema

In the augmented Active Directory schema, both users (Jane and Jim) and groups exist in the LDAP server as first-class entries, and group membership is stored in attributes on the user. The following snippet of ldif defines the users and group for this schema:

LDAP entries that use augmented Active Directory schema: augmented_active_directory.ldif

dn: ou=users,dc=example,dc=com
objectClass: organizationalUnit
ou: users

dn: cn=Jane,ou=users,dc=example,dc=com
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: testPerson
cn: Jane
sn: Smith
displayName: Jane Smith
mail: jane.smith@example.com
memberOf: cn=admins,ou=groups,dc=example,dc=com 1

dn: cn=Jim,ou=users,dc=example,dc=com
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: testPerson
cn: Jim
sn: Adams
displayName: Jim Adams
mail: jim.adams@example.com
memberOf: cn=admins,ou=groups,dc=example,dc=com

dn: ou=groups,dc=example,dc=com
objectClass: organizationalUnit
ou: groups

dn: cn=admins,ou=groups,dc=example,dc=com 2
objectClass: groupOfNames
cn: admins
owner: cn=admin,dc=example,dc=com
description: System Administrators
member: cn=Jane,ou=users,dc=example,dc=com
member: cn=Jim,ou=users,dc=example,dc=com

1
The user’s group memberships are listed as attributes on the user.
2
The group is a first-class entry on the LDAP server.

Prerequisites

  • Create the configuration file.

Procedure

  • Run the sync with the augmented_active_directory_config.yaml file:

    $ oc adm groups sync --sync-config=augmented_active_directory_config.yaml --confirm

    OpenShift Container Platform creates the following group record as a result of the above sync operation:

    OpenShift Container Platform group created by using the augmented_active_directory_config.yaml file

    apiVersion: user.openshift.io/v1
    kind: Group
    metadata:
      annotations:
        openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1
        openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2
        openshift.io/ldap.url: LDAP_SERVER_IP:389 3
      creationTimestamp:
      name: admins 4
    users: 5
    - jane.smith@example.com
    - jim.adams@example.com

    1
    The last time this OpenShift Container Platform group was synchronized with the LDAP server, in ISO 8601 format.
    2
    The unique identifier for the group on the LDAP server.
    3
    The IP address and host of the LDAP server where this group’s record is stored.
    4
    The name of the group as specified by the sync file.
    5
    The users that are members of the group, named as specified by the sync file.
17.5.5.1. LDAP nested membership sync example

Groups in OpenShift Container Platform do not nest. The LDAP server must flatten group membership before the data can be consumed. Microsoft’s Active Directory Server supports this feature via the LDAP_MATCHING_RULE_IN_CHAIN rule, which has the OID 1.2.840.113556.1.4.1941. Furthermore, only explicitly whitelisted groups can be synced when using this matching rule.

This section has an example for the augmented Active Directory schema, which synchronizes a group named admins that has one user Jane and one group otheradmins as members. The otheradmins group has one user member: Jim. This example explains:

  • How the group and users are added to the LDAP server.
  • What the LDAP sync configuration file looks like.
  • What the resulting group record in OpenShift Container Platform will be after synchronization.

In the augmented Active Directory schema, both users (Jane and Jim) and groups exist in the LDAP server as first-class entries, and group membership is stored in attributes on the user or the group. The following snippet of ldif defines the users and groups for this schema:

LDAP entries that use augmented Active Directory schema with nested members: augmented_active_directory_nested.ldif

dn: ou=users,dc=example,dc=com
objectClass: organizationalUnit
ou: users

dn: cn=Jane,ou=users,dc=example,dc=com
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: testPerson
cn: Jane
sn: Smith
displayName: Jane Smith
mail: jane.smith@example.com
memberOf: cn=admins,ou=groups,dc=example,dc=com 1

dn: cn=Jim,ou=users,dc=example,dc=com
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: testPerson
cn: Jim
sn: Adams
displayName: Jim Adams
mail: jim.adams@example.com
memberOf: cn=otheradmins,ou=groups,dc=example,dc=com 2

dn: ou=groups,dc=example,dc=com
objectClass: organizationalUnit
ou: groups

dn: cn=admins,ou=groups,dc=example,dc=com 3
objectClass: group
cn: admins
owner: cn=admin,dc=example,dc=com
description: System Administrators
member: cn=Jane,ou=users,dc=example,dc=com
member: cn=otheradmins,ou=groups,dc=example,dc=com

dn: cn=otheradmins,ou=groups,dc=example,dc=com 4
objectClass: group
cn: otheradmins
owner: cn=admin,dc=example,dc=com
description: Other System Administrators
memberOf: cn=admins,ou=groups,dc=example,dc=com 5 6
member: cn=Jim,ou=users,dc=example,dc=com

1 2 5
The user’s and group’s memberships are listed as attributes on the object.
3 4
The groups are first-class entries on the LDAP server.
6
The otheradmins group is a member of the admins group.

When syncing nested groups with Active Directory, you must provide an LDAP query definition for both user entries and group entries, as well as the attributes with which to represent them in the internal OpenShift Container Platform group records. Furthermore, certain changes are required in this configuration:

  • The oc adm groups sync command must explicitly whitelist groups.
  • The user’s groupMembershipAttributes must include "memberOf:1.2.840.113556.1.4.1941:" to comply with the LDAP_MATCHING_RULE_IN_CHAIN rule.
  • The groupUIDAttribute must be set to dn.
  • The groupsQuery:

    • Must not set filter.
    • Must set a valid derefAliases.
    • Should not set baseDN as that value is ignored.
    • Should not set scope as that value is ignored.

For clarity, the group you create in OpenShift Container Platform should use attributes other than the distinguished name whenever possible for user- or administrator-facing fields. For example, identify the users of an OpenShift Container Platform group by their e-mail, and use the name of the group as the common name. The following configuration file creates these relationships:

LDAP sync configuration that uses augmented Active Directory schema with nested members: augmented_active_directory_config_nested.yaml

kind: LDAPSyncConfig
apiVersion: v1
url: ldap://LDAP_SERVICE_IP:389
augmentedActiveDirectory:
    groupsQuery: 1
        derefAliases: never
        pageSize: 0
    groupUIDAttribute: dn 2
    groupNameAttributes: [ cn ] 3
    usersQuery:
        baseDN: "ou=users,dc=example,dc=com"
        scope: sub
        derefAliases: never
        filter: (objectclass=person)
        pageSize: 0
    userNameAttributes: [ mail ] 4
    groupMembershipAttributes: [ "memberOf:1.2.840.113556.1.4.1941:" ] 5

1
groupsQuery filters cannot be specified. The groupsQuery base DN and scope values are ignored. groupsQuery must set a valid derefAliases.
2
The attribute that uniquely identifies a group on the LDAP server. It must be set to dn.
3
The attribute to use as the name of the group.
4
The attribute to use as the name of the user in the OpenShift Container Platform group record. mail or sAMAccountName are preferred choices in most installations.
5
The attribute on the user that stores the membership information. Note the use of LDAP_MATCHING_RULE_IN_CHAIN.

Prerequisites

  • Create the configuration file.

Procedure

  • Run the sync with the augmented_active_directory_config_nested.yaml file:

    $ oc adm groups sync \
        'cn=admins,ou=groups,dc=example,dc=com' \
        --sync-config=augmented_active_directory_config_nested.yaml \
        --confirm
    Note

    You must explicitly whitelist the cn=admins,ou=groups,dc=example,dc=com group.

    OpenShift Container Platform creates the following group record as a result of the above sync operation:

    OpenShift Container Platform group created by using the augmented_active_directory_config_nested.yaml file

    apiVersion: user.openshift.io/v1
    kind: Group
    metadata:
      annotations:
        openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1
        openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2
        openshift.io/ldap.url: LDAP_SERVER_IP:389 3
      creationTimestamp:
      name: admins 4
    users: 5
    - jane.smith@example.com
    - jim.adams@example.com

    1
    The last time this OpenShift Container Platform group was synchronized with the LDAP server, in ISO 8601 format.
    2
    The unique identifier for the group on the LDAP server.
    3
    The IP address and host of the LDAP server where this group’s record is stored.
    4
    The name of the group as specified by the sync file.
    5
    The users that are members of the group, named as specified by the sync file. Note that members of nested groups are included since the group membership was flattened by the Microsoft Active Directory Server.

17.6. LDAP sync configuration specification

The object specification for the configuration file is below. Note that the different schema objects have different fields. For example, v1.ActiveDirectoryConfig has no groupsQuery field whereas v1.RFC2307Config and v1.AugmentedActiveDirectoryConfig both do.

Important

There is no support for binary attributes. All attribute data coming from the LDAP server must be in the format of a UTF-8 encoded string. For example, never use a binary attribute, such as objectGUID, as an ID attribute. You must use string attributes, such as sAMAccountName or userPrincipalName, instead.

17.6.1. v1.LDAPSyncConfig

LDAPSyncConfig holds the necessary configuration options to define an LDAP group sync.

Name    Description    Schema

kind

String value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#types-kinds

string

apiVersion

Defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#resources

string

url

Host is the scheme, host and port of the LDAP server to connect to: scheme://host:port

string

bindDN

Optional DN to bind to the LDAP server with.

string

bindPassword

Optional password to bind with during the search phase.

v1.StringSource

insecure

If true, indicates the connection should not use TLS. If false, ldaps:// URLs connect using TLS, and ldap:// URLs are upgraded to a TLS connection using StartTLS as specified in https://tools.ietf.org/html/rfc2830. If you set insecure to true, you cannot use ldaps:// URL schemes.

boolean

ca

Optional trusted certificate authority bundle to use when making requests to the server. If empty, the default system roots are used.

string

groupUIDNameMapping

Optional direct mapping of LDAP group UIDs to OpenShift Container Platform group names.

object

rfc2307

Holds the configuration for extracting data from an LDAP server set up in a fashion similar to RFC2307: first-class group and user entries, with group membership determined by a multi-valued attribute on the group entry listing its members.

v1.RFC2307Config

activeDirectory

Holds the configuration for extracting data from an LDAP server set up in a fashion similar to that used in Active Directory: first-class user entries, with group membership determined by a multi-valued attribute on members listing groups they are a member of.

v1.ActiveDirectoryConfig

augmentedActiveDirectory

Holds the configuration for extracting data from an LDAP server set up in a fashion similar to that used in Active Directory as described above, with one addition: first-class group entries exist and are used to hold metadata but not group membership.

v1.AugmentedActiveDirectoryConfig

17.6.2. v1.StringSource

StringSource allows specifying a string inline, or externally via environment variable or file. When it contains only a string value, it marshals to a simple JSON string.

Name    Description    Schema

value

Specifies the cleartext value, or an encrypted value if keyFile is specified.

string

env

Specifies an environment variable containing the cleartext value, or an encrypted value if the keyFile is specified.

string

file

References a file containing the cleartext value, or an encrypted value if a keyFile is specified.

string

keyFile

References a file containing the key to use to decrypt the value.

string
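
For example, the bindPassword field of LDAPSyncConfig is a v1.StringSource, so it can be supplied inline, from an environment variable, or from a file. The following forms are a sketch; the environment variable name is illustrative, and the file path matches the cron job example earlier in this chapter:

bindPassword:
  value: "<cleartext_password>"

bindPassword:
  env: "LDAP_BIND_PASSWORD"

bindPassword:
  file: "/etc/secrets/bindPassword"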

17.6.3. v1.LDAPQuery

LDAPQuery holds the options necessary to build an LDAP query.

Name    Description    Schema

baseDN

DN of the branch of the directory where all searches should start from.

string

scope

The optional scope of the search. Can be base: only the base object, one: all objects on the base level, sub: the entire subtree. Defaults to sub if not set.

string

derefAliases

The optional behavior of the search with regards to aliases. Can be never: never dereference aliases, search: only dereference in searching, base: only dereference in finding the base object, always: always dereference. Defaults to always if not set.

string

timeout

Holds the limit of time in seconds that any request to the server can remain outstanding before the wait for a response is given up. If this is 0, no client-side limit is imposed.

integer

filter

A valid LDAP search filter that retrieves all relevant entries from the LDAP server with the base DN.

string

pageSize

Maximum preferred page size, measured in LDAP entries. A page size of 0 means no paging will be done.

integer
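
For example, a usersQuery that sets each of these fields might look like the following (a sketch; the values are illustrative):

usersQuery:
    baseDN: "ou=users,dc=example,dc=com"
    scope: sub
    derefAliases: never
    timeout: 10
    filter: (objectclass=person)
    pageSize: 100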

17.6.4. v1.RFC2307Config

RFC2307Config holds the necessary configuration options to define how an LDAP group sync interacts with an LDAP server using the RFC2307 schema.

Name    Description    Schema

groupsQuery

Holds the template for an LDAP query that returns group entries.

v1.LDAPQuery

groupUIDAttribute

Defines which attribute on an LDAP group entry will be interpreted as its unique identifier. (ldapGroupUID)

string

groupNameAttributes

Defines which attributes on an LDAP group entry will be interpreted as its name to use for an OpenShift Container Platform group.

string array

groupMembershipAttributes

Defines which attributes on an LDAP group entry will be interpreted as its members. The values contained in those attributes must be queryable by your UserUIDAttribute.

string array

usersQuery

Holds the template for an LDAP query that returns user entries.

v1.LDAPQuery

userUIDAttribute

Defines which attribute on an LDAP user entry will be interpreted as its unique identifier. It must correspond to values that will be found from the GroupMembershipAttributes.

string

userNameAttributes

Defines which attributes on an LDAP user entry will be used, in order, as its OpenShift Container Platform user name, that is, the name of the user in the OpenShift Container Platform group record. The first attribute with a non-empty value is used. This should match your PreferredUsername setting for your LDAPPasswordIdentityProvider. mail or sAMAccountName are preferred choices in most installations.

string array

tolerateMemberNotFoundErrors

Determines the behavior of the LDAP sync job when missing user entries are encountered. If true, an LDAP query for users that does not find any entries is tolerated and only an error is logged. If false, the LDAP sync job fails if a query for users does not find any entries. The default value is false. Misconfigured LDAP sync jobs with this flag set to true can cause group membership to be removed, so it is recommended that you use this flag with caution.

boolean

tolerateMemberOutOfScopeErrors

Determines the behavior of the LDAP sync job when out-of-scope user entries are encountered. If true, an LDAP query for a user that falls outside of the base DN given for the user query is tolerated and only an error is logged. If false, the LDAP sync job fails if a user query would search outside of the base DN specified for the user query. Misconfigured LDAP sync jobs with this flag set to true can result in groups missing users, so it is recommended that you use this flag with caution.

boolean

17.6.5. v1.ActiveDirectoryConfig

ActiveDirectoryConfig holds the necessary configuration options to define how an LDAP group sync interacts with an LDAP server using the Active Directory schema.

Name    Description    Schema

usersQuery

Holds the template for an LDAP query that returns user entries.

v1.LDAPQuery

userNameAttributes

Defines which attributes on an LDAP user entry will be interpreted as its OpenShift Container Platform user name, that is, the name of the user in the OpenShift Container Platform group record. mail or sAMAccountName are preferred choices in most installations.

string array

groupMembershipAttributes

Defines which attributes on an LDAP user entry will be interpreted as the groups it is a member of.

string array

17.6.6. v1.AugmentedActiveDirectoryConfig

AugmentedActiveDirectoryConfig holds the necessary configuration options to define how an LDAP group sync interacts with an LDAP server using the augmented Active Directory schema.

Name    Description    Schema

usersQuery

Holds the template for an LDAP query that returns user entries.

v1.LDAPQuery

userNameAttributes

Defines which attributes on an LDAP user entry will be interpreted as its OpenShift Container Platform user name, that is, the name of the user in the OpenShift Container Platform group record. mail or sAMAccountName are preferred choices in most installations.

string array

groupMembershipAttributes

Defines which attributes on an LDAP user entry will be interpreted as the groups it is a member of.

string array

groupsQuery

Holds the template for an LDAP query that returns group entries.

v1.LDAPQuery

groupUIDAttribute

Defines which attribute on an LDAP group entry will be interpreted as its unique identifier. (ldapGroupUID)

string

groupNameAttributes

Defines which attributes on an LDAP group entry will be interpreted as its name to use for an OpenShift Container Platform group.

string array

Chapter 18. Managing cloud provider credentials

18.1. About the Cloud Credential Operator

The Cloud Credential Operator (CCO) manages cloud provider credentials as custom resource definitions (CRDs). The CCO syncs on CredentialsRequest custom resources (CRs) to allow OpenShift Container Platform components to request cloud provider credentials with the specific permissions that are required for the cluster to run.

By setting different values for the credentialsMode parameter in the install-config.yaml file, the CCO can be configured to operate in several different modes. If no mode is specified, or the credentialsMode parameter is set to an empty string (""), the CCO operates in its default mode.

18.1.1. Modes

By setting different values for the credentialsMode parameter in the install-config.yaml file, the CCO can be configured to operate in mint, passthrough, or manual mode. These options provide transparency and flexibility in how the CCO uses cloud credentials to process CredentialsRequest CRs in the cluster, and allow the CCO to be configured to suit the security requirements of your organization. Not all CCO modes are supported for all cloud providers.

  • Mint: In mint mode, the CCO uses the provided admin-level cloud credential to create new credentials for components in the cluster with only the specific permissions that are required.
  • Passthrough: In passthrough mode, the CCO passes the provided cloud credential to the components that request cloud credentials.
  • Manual: In manual mode, a user manages cloud credentials instead of the CCO.

    • Manual with AWS Security Token Service: In manual mode, you can configure an AWS cluster to use Amazon Web Services Security Token Service (AWS STS). With this configuration, the CCO uses temporary credentials for different components.
    • Manual with GCP Workload Identity: In manual mode, you can configure a GCP cluster to use GCP Workload Identity. With this configuration, the CCO uses temporary credentials for different components.

Table 18.1. CCO mode support matrix

Cloud provider                      Mint  Passthrough  Manual
Alibaba Cloud                                          X
Amazon Web Services (AWS)           X     X            X
Microsoft Azure                           X [1]        X
Google Cloud Platform (GCP)         X     X            X
IBM Cloud                                              X
Red Hat OpenStack Platform (RHOSP)        X
Red Hat Virtualization (RHV)              X
VMware vSphere                            X

  1. Manual mode is the only supported CCO configuration for Microsoft Azure Stack Hub.
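
For example, to select a mode explicitly at installation time, set the credentialsMode parameter in the install-config.yaml file (a minimal excerpt; the remaining installation configuration fields are omitted):

apiVersion: v1
credentialsMode: Mint
# ... remaining install-config.yaml fields, such as baseDomain, metadata, and platform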

18.1.2. Determining the Cloud Credential Operator mode

For platforms that support using the CCO in multiple modes, you can determine what mode the CCO is configured to use by using the web console or the CLI.

Figure 18.1. Determining the CCO configuration

Decision tree showing how to determine the configured CCO credentials mode for your cluster.
18.1.2.1. Determining the Cloud Credential Operator mode by using the web console

You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the web console.

Note

Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) clusters support multiple CCO modes.

Prerequisites

  • You have access to an OpenShift Container Platform account with cluster administrator permissions.

Procedure

  1. Log in to the OpenShift Container Platform web console as a user with the cluster-admin role.
  2. Navigate to Administration → Cluster Settings.
  3. On the Cluster Settings page, select the Configuration tab.
  4. Under Configuration resource, select CloudCredential.
  5. On the CloudCredential details page, select the YAML tab.
  6. In the YAML block, check the value of spec.credentialsMode. The following values are possible, though not all are supported on all platforms:

    • '': The CCO is operating in the default mode. In this configuration, the CCO operates in mint or passthrough mode, depending on the credentials provided during installation.
    • Mint: The CCO is operating in mint mode.
    • Passthrough: The CCO is operating in passthrough mode.
    • Manual: The CCO is operating in manual mode.
    Important

    To determine the specific configuration of an AWS or GCP cluster that has a spec.credentialsMode of '', Mint, or Manual, you must investigate further.

    AWS and GCP clusters support using mint mode with the root secret deleted.

    An AWS or GCP cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster using the AWS Security Token Service (STS) or GCP Workload Identity. You can determine whether your cluster uses this strategy by examining the cluster Authentication object.

  7. AWS or GCP clusters that use the default ('') only: To determine whether the cluster is operating in mint or passthrough mode, inspect the annotations on the cluster root secret:

    1. Navigate to Workloads → Secrets and look for the root secret for your cloud provider.

      Note

      Ensure that the Project dropdown is set to All Projects.

      Platform   Secret name
      AWS        aws-creds
      GCP        gcp-credentials

    2. To view the CCO mode that the cluster is using, click 1 annotation under Annotations, and check the value field. The following values are possible:

      • Mint: The CCO is operating in mint mode.
      • Passthrough: The CCO is operating in passthrough mode.

      If your cluster uses mint mode, you can also determine whether the cluster is operating without the root secret.

  8. AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating without the root secret, navigate to Workloads → Secrets and look for the root secret for your cloud provider.

    Note

    Ensure that the Project dropdown is set to All Projects.

    Platform   Secret name
    AWS        aws-creds
    GCP        gcp-credentials

    • If you see one of these values, your cluster is using mint or passthrough mode with the root secret present.
    • If you do not see these values, your cluster is using the CCO in mint mode with the root secret removed.
  9. AWS or GCP clusters that use manual mode only: To determine whether the cluster is configured to create and manage cloud credentials from outside of the cluster, you must check the cluster Authentication object YAML values.

    1. Navigate to Administration → Cluster Settings.
    2. On the Cluster Settings page, select the Configuration tab.
    3. Under Configuration resource, select Authentication.
    4. On the Authentication details page, select the YAML tab.
    5. In the YAML block, check the value of the .spec.serviceAccountIssuer parameter.

      • A value that contains a URL that is associated with your cloud provider indicates that the CCO is using manual mode with AWS STS or GCP Workload Identity to create and manage cloud credentials from outside of the cluster. These clusters are configured using the ccoctl utility.
      • An empty value ('') indicates that the cluster is using the CCO in manual mode but was not configured using the ccoctl utility.
18.1.2.2. Determining the Cloud Credential Operator mode by using the CLI

You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the CLI.

Note

Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) clusters support multiple CCO modes.

Prerequisites

  • You have access to an OpenShift Container Platform account with cluster administrator permissions.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Log in to oc on the cluster as a user with the cluster-admin role.
  2. To determine the mode that the CCO is configured to use, enter the following command:

    $ oc get cloudcredentials cluster \
      -o=jsonpath={.spec.credentialsMode}

    The following output values are possible, though not all are supported on all platforms:

    • '': The CCO is operating in the default mode. In this configuration, the CCO operates in mint or passthrough mode, depending on the credentials provided during installation.
    • Mint: The CCO is operating in mint mode.
    • Passthrough: The CCO is operating in passthrough mode.
    • Manual: The CCO is operating in manual mode.
    Important

    To determine the specific configuration of an AWS or GCP cluster that has a spec.credentialsMode of '', Mint, or Manual, you must investigate further.

    AWS and GCP clusters support using mint mode with the root secret deleted.

    An AWS or GCP cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster using the AWS Security Token Service (STS) or GCP Workload Identity. You can determine whether your cluster uses this strategy by examining the cluster Authentication object.

  3. AWS or GCP clusters that use the default ('') only: To determine whether the cluster is operating in mint or passthrough mode, run the following command:

    $ oc get secret <secret_name> \
      -n kube-system \
      -o jsonpath \
      --template '{ .metadata.annotations }'

    where <secret_name> is aws-creds for AWS or gcp-credentials for GCP.

    This command displays the value of the .metadata.annotations parameter in the cluster root secret object. The following output values are possible:

    • Mint: The CCO is operating in mint mode.
    • Passthrough: The CCO is operating in passthrough mode.

    If your cluster uses mint mode, you can also determine whether the cluster is operating without the root secret.

  4. AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating without the root secret, run the following command:

    $ oc get secret <secret_name> \
      -n=kube-system

    where <secret_name> is aws-creds for AWS or gcp-credentials for GCP.

    If the root secret is present, the output of this command returns information about the secret. An error indicates that the root secret is not present on the cluster.

  5. AWS or GCP clusters that use manual mode only: To determine whether the cluster is configured to create and manage cloud credentials from outside of the cluster, run the following command:

    $ oc get authentication cluster \
      -o jsonpath \
      --template='{ .spec.serviceAccountIssuer }'

    This command displays the value of the .spec.serviceAccountIssuer parameter in the cluster Authentication object.

    • An output of a URL that is associated with your cloud provider indicates that the CCO is using manual mode with AWS STS or GCP Workload Identity to create and manage cloud credentials from outside of the cluster. These clusters are configured using the ccoctl utility.
    • An empty output indicates that the cluster is using the CCO in manual mode but was not configured using the ccoctl utility.

18.1.3. Default behavior

For platforms on which multiple modes are supported (AWS, Azure, and GCP), when the CCO operates in its default mode, it checks the provided credentials dynamically to determine for which mode they are sufficient to process CredentialsRequest CRs.

By default, the CCO determines whether the credentials are sufficient for mint mode, which is the preferred mode of operation, and uses those credentials to create appropriate credentials for components in the cluster. If the credentials are not sufficient for mint mode, it determines whether they are sufficient for passthrough mode. If the credentials are not sufficient for passthrough mode, the CCO cannot adequately process CredentialsRequest CRs.

If the provided credentials are determined to be insufficient during installation, the installation fails. For AWS, the installer fails early in the process and indicates which required permissions are missing. Other providers might not provide specific information about the cause of the error until errors are encountered.

If the credentials are changed after a successful installation and the CCO determines that the new credentials are insufficient, the CCO puts conditions on any new CredentialsRequest CRs to indicate that it cannot process them because of the insufficient credentials.

To resolve insufficient credentials issues, provide a credential with sufficient permissions. If an error occurred during installation, try installing again. For issues with new CredentialsRequest CRs, wait for the CCO to try to process the CR again. As an alternative, you can manually create IAM for AWS, Azure, and GCP.
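
If the CCO cannot process a CredentialsRequest CR because of insufficient credentials, the condition it sets is visible on the CR itself. The following command is a minimal sketch for listing the condition types on all CredentialsRequest CRs; the exact condition names depend on the failure:

$ oc -n openshift-cloud-credential-operator get credentialsrequests \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[*].type}{"\n"}{end}'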

18.1.4. Additional resources

18.2. Using mint mode

Mint mode is supported for Amazon Web Services (AWS) and Google Cloud Platform (GCP).

Mint mode is the default mode on the platforms for which it is supported. In this mode, the Cloud Credential Operator (CCO) uses the provided administrator-level cloud credential to create new credentials for components in the cluster with only the specific permissions that are required.

If the credential is not removed after installation, it is stored and used by the CCO to process CredentialsRequest CRs for components in the cluster and create new credentials for each with only the specific permissions that are required. The continuous reconciliation of cloud credentials in mint mode allows actions that require additional credentials or permissions, such as upgrading, to proceed.

Mint mode stores the administrator-level credential in the cluster kube-system namespace. If this approach does not meet the security requirements of your organization, see Alternatives to storing administrator-level secrets in the kube-system project for AWS or GCP.

18.2.1. Mint mode permissions requirements

When using the CCO in mint mode, ensure that the credential you provide meets the requirements of the cloud on which you are running or installing OpenShift Container Platform. If the provided credentials are not sufficient for mint mode, the CCO cannot create an IAM user.

18.2.1.1. Amazon Web Services (AWS) permissions

The credential you provide for mint mode in AWS must have the following permissions:

  • iam:CreateAccessKey
  • iam:CreateUser
  • iam:DeleteAccessKey
  • iam:DeleteUser
  • iam:DeleteUserPolicy
  • iam:GetUser
  • iam:GetUserPolicy
  • iam:ListAccessKeys
  • iam:PutUserPolicy
  • iam:TagUser
  • iam:SimulatePrincipalPolicy

18.2.1.2. Google Cloud Platform (GCP) permissions

The credential you provide for mint mode in GCP must have the following permissions:

  • resourcemanager.projects.get
  • serviceusage.services.list
  • iam.serviceAccountKeys.create
  • iam.serviceAccountKeys.delete
  • iam.serviceAccounts.create
  • iam.serviceAccounts.delete
  • iam.serviceAccounts.get
  • iam.roles.get
  • resourcemanager.projects.getIamPolicy
  • resourcemanager.projects.setIamPolicy

18.2.2. Admin credentials root secret format

Each cloud provider uses a credentials root secret in the kube-system namespace by convention, which is then used to satisfy all credentials requests and create their respective secrets. This is done either by minting new credentials with mint mode, or by copying the credentials root secret with passthrough mode.

The format for the secret varies by cloud, and is also used for each CredentialsRequest secret.

Amazon Web Services (AWS) secret format

apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: aws-creds
stringData:
  aws_access_key_id: <base64-encoded_access_key_id>
  aws_secret_access_key: <base64-encoded_secret_access_key>

Google Cloud Platform (GCP) secret format

apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: gcp-credentials
stringData:
  service_account.json: <base64-encoded_service_account>
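
As an alternative to applying a YAML manifest, you can create the root secret from the CLI. The following is a minimal sketch for AWS; <access_key_id> and <secret_access_key> are placeholders for your administrator-level credential, and the oc client encodes the values for you:

$ oc create secret generic aws-creds \
  -n kube-system \
  --from-literal=aws_access_key_id=<access_key_id> \
  --from-literal=aws_secret_access_key=<secret_access_key>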

18.2.3. Mint mode with removal or rotation of the administrator-level credential

Currently, this mode is only supported on AWS and GCP.

In this mode, a user installs OpenShift Container Platform with an administrator-level credential just like the normal mint mode. However, this process removes the administrator-level credential secret from the cluster post-installation.

The administrator can have the Cloud Credential Operator make its own request for a read-only credential that allows it to verify whether all CredentialsRequest objects have their required permissions. As a result, the administrator-level credential is not required unless something needs to be changed. After the associated credential is removed, it can be deleted or deactivated on the underlying cloud, if desired.

Note

Prior to a non z-stream upgrade, you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the upgrade might be blocked.

The administrator-level credential is not stored in the cluster permanently.

Following these steps still requires the administrator-level credential in the cluster for brief periods of time. It also requires manually reinstating the secret with administrator-level credentials for each upgrade.

18.2.3.1. Rotating cloud provider credentials manually

If your cloud provider credentials are changed for any reason, you must manually update the secret that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials.

The process for rotating cloud credentials depends on the mode that the CCO is configured to use. After you rotate credentials for a cluster that is using mint mode, you must manually remove the component credentials that were created by the removed credential.

Prerequisites

  • Your cluster is installed on a platform that supports rotating cloud credentials manually with the CCO mode that you are using:

    • For mint mode, Amazon Web Services (AWS) and Google Cloud Platform (GCP) are supported.
  • You have changed the credentials that are used to interface with your cloud provider.
  • The new credentials have sufficient permissions for the mode the CCO is configured to use in your cluster.

Procedure

  1. In the Administrator perspective of the web console, navigate to Workloads → Secrets.
  2. In the table on the Secrets page, find the root secret for your cloud provider.

    Platform            Secret name
    AWS                 aws-creds
    GCP                 gcp-credentials

  3. Click the Options menu (⋮) in the same row as the secret and select Edit Secret.
  4. Record the contents of the Value field or fields. You can use this information to verify that the value is different after updating the credentials.
  5. Update the text in the Value field or fields with the new authentication information for your cloud provider, and then click Save.
  6. Delete each component secret that is referenced by the individual CredentialsRequest objects.

    1. Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role.
    2. Get the names and namespaces of all referenced component secrets:

      $ oc -n openshift-cloud-credential-operator get CredentialsRequest \
        -o json | jq -r '.items[] | select (.spec.providerSpec.kind=="<provider_spec>") | .spec.secretRef'

      where <provider_spec> is the corresponding value for your cloud provider:

      • AWS: AWSProviderSpec
      • GCP: GCPProviderSpec

      Partial example output for AWS

      {
        "name": "ebs-cloud-credentials",
        "namespace": "openshift-cluster-csi-drivers"
      }
      {
        "name": "cloud-credential-operator-iam-ro-creds",
        "namespace": "openshift-cloud-credential-operator"
      }

    3. Delete each of the referenced component secrets:

      $ oc delete secret <secret_name> \1
        -n <secret_namespace> 2
      1
      Specify the name of a secret.
      2
      Specify the namespace that contains the secret.

      Example deletion of an AWS secret

      $ oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers

      You do not need to manually delete the credentials from your provider console. Deleting the referenced component secrets will cause the CCO to delete the existing credentials from the platform and create new ones.

Verification

To verify that the credentials have changed:

  1. In the Administrator perspective of the web console, navigate to Workloads → Secrets.
  2. Verify that the contents of the Value field or fields have changed. You can also confirm from the CLI that the CCO recreated the component secrets, as shown in the sketch that follows.
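
    For example, on an AWS cluster you can check the creation timestamp of a recreated component secret; a recent timestamp indicates that the CCO minted a new credential after you deleted the old secret:

    $ oc -n openshift-cluster-csi-drivers get secret ebs-cloud-credentials \
      -o jsonpath='{.metadata.creationTimestamp}'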

18.2.3.2. Removing cloud provider credentials

After installing an OpenShift Container Platform cluster with the Cloud Credential Operator (CCO) in mint mode, you can remove the administrator-level credential secret from the kube-system namespace in the cluster. The administrator-level credential is required only during changes that require its elevated permissions, such as upgrades.

Note

Prior to a non z-stream upgrade, you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the upgrade might be blocked.

Prerequisites

  • Your cluster is installed on a platform that supports removing cloud credentials from the CCO. Supported platforms are AWS and GCP.

Procedure

  1. In the Administrator perspective of the web console, navigate to Workloads → Secrets.
  2. In the table on the Secrets page, find the root secret for your cloud provider.

    Platform            Secret name
    AWS                 aws-creds
    GCP                 gcp-credentials

  3. Click the Options menu (⋮) in the same row as the secret and select Delete Secret. Alternatively, you can delete the root secret from the CLI, as shown in the sketch that follows.
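
    For example, on an AWS cluster you can delete the root secret with a command similar to the following sketch; on a GCP cluster, delete the gcp-credentials secret instead:

    $ oc delete secret aws-creds -n kube-system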

18.2.4. Additional resources

18.3. Using passthrough mode

Passthrough mode is supported for Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Red Hat OpenStack Platform (RHOSP), Red Hat Virtualization (RHV), and VMware vSphere.

In passthrough mode, the Cloud Credential Operator (CCO) passes the provided cloud credential to the components that request cloud credentials. The credential must have permissions to perform the installation and complete the operations that are required by components in the cluster, but does not need to be able to create new credentials. The CCO does not attempt to create additional limited-scoped credentials in passthrough mode.

Note

Manual mode is the only supported CCO configuration for Microsoft Azure Stack Hub.

18.3.1. Passthrough mode permissions requirements

When using the CCO in passthrough mode, ensure that the credential you provide meets the requirements of the cloud on which you are running or installing OpenShift Container Platform. If the provided credentials the CCO passes to a component that creates a CredentialsRequest CR are not sufficient, that component will report an error when it tries to call an API that it does not have permissions for.

18.3.1.1. Amazon Web Services (AWS) permissions

The credential you provide for passthrough mode in AWS must have all the requested permissions for all CredentialsRequest CRs that are required by the version of OpenShift Container Platform you are running or installing.

To locate the CredentialsRequest CRs that are required, see Manually creating IAM for AWS.

18.3.1.2. Microsoft Azure permissions

The credential you provide for passthrough mode in Azure must have all the requested permissions for all CredentialsRequest CRs that are required by the version of OpenShift Container Platform you are running or installing.

To locate the CredentialsRequest CRs that are required, see Manually creating IAM for Azure.

18.3.1.3. Google Cloud Platform (GCP) permissions

The credential you provide for passthrough mode in GCP must have all the requested permissions for all CredentialsRequest CRs that are required by the version of OpenShift Container Platform you are running or installing.

To locate the CredentialsRequest CRs that are required, see Manually creating IAM for GCP.

18.3.1.4. Red Hat OpenStack Platform (RHOSP) permissions

To install an OpenShift Container Platform cluster on RHOSP, the CCO requires a credential with the permissions of a member user role.

18.3.1.5. Red Hat Virtualization (RHV) permissions

To install an OpenShift Container Platform cluster on RHV, the CCO requires a credential with the following privileges:

  • DiskOperator
  • DiskCreator
  • UserTemplateBasedVm
  • TemplateOwner
  • TemplateCreator
  • ClusterAdmin on the specific cluster that is targeted for OpenShift Container Platform deployment

18.3.1.6. VMware vSphere permissions

To install an OpenShift Container Platform cluster on VMware vSphere, the CCO requires a credential with the following vSphere privileges:

Table 18.2. Required vSphere privileges
Category                    Privileges
Datastore                   Allocate space
Folder                      Create folder, Delete folder
vSphere Tagging             All privileges
Network                     Assign network
Resource                    Assign virtual machine to resource pool
Profile-driven storage      All privileges
vApp                        All privileges
Virtual machine             All privileges

18.3.2. Admin credentials root secret format

Each cloud provider uses a credentials root secret in the kube-system namespace by convention, which is then used to satisfy all credentials requests and create their respective secrets. This is done either by minting new credentials with mint mode, or by copying the credentials root secret with passthrough mode.

The format for the secret varies by cloud, and is also used for each CredentialsRequest secret.

Amazon Web Services (AWS) secret format

apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: aws-creds
stringData:
  aws_access_key_id: <base64-encoded_access_key_id>
  aws_secret_access_key: <base64-encoded_secret_access_key>

Microsoft Azure secret format

apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: azure-credentials
stringData:
  azure_subscription_id: <base64-encoded_subscription_id>
  azure_client_id: <base64-encoded_client_id>
  azure_client_secret: <base64-encoded_client_secret>
  azure_tenant_id: <base64-encoded_tenant_id>
  azure_resource_prefix: <base64-encoded_resource_prefix>
  azure_resourcegroup: <base64-encoded_resource_group>
  azure_region: <base64-encoded_region>

On Microsoft Azure, the credentials secret format includes two properties that must contain the cluster’s infrastructure ID, which is generated randomly for each cluster installation. This value can be found after you run openshift-install create manifests:

$ cat .openshift_install_state.json | jq '."*installconfig.ClusterID".InfraID' -r

Example output

mycluster-2mpcn

This value would be used in the secret data as follows:

azure_resource_prefix: mycluster-2mpcn
azure_resourcegroup: mycluster-2mpcn-rg

Google Cloud Platform (GCP) secret format

apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: gcp-credentials
stringData:
  service_account.json: <base64-encoded_service_account>

Red Hat OpenStack Platform (RHOSP) secret format

apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: openstack-credentials
data:
  clouds.yaml: <base64-encoded_cloud_creds>
  clouds.conf: <base64-encoded_cloud_creds_init>

Red Hat Virtualization (RHV) secret format

apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: ovirt-credentials
data:
  ovirt_url: <base64-encoded_url>
  ovirt_username: <base64-encoded_username>
  ovirt_password: <base64-encoded_password>
  ovirt_insecure: <base64-encoded_insecure>
  ovirt_ca_bundle: <base64-encoded_ca_bundle>

VMware vSphere secret format

apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: vsphere-creds
data:
  vsphere.openshift.example.com.username: <base64-encoded_username>
  vsphere.openshift.example.com.password: <base64-encoded_password>

18.3.3. Passthrough mode credential maintenance

If CredentialsRequest CRs change over time as the cluster is upgraded, you must manually update the passthrough mode credential to meet the requirements. To avoid credentials issues during an upgrade, check the CredentialsRequest CRs in the release image for the new version of OpenShift Container Platform before upgrading. To locate the CredentialsRequest CRs that are required for your cloud provider, see Manually creating IAM for AWS, Azure, or GCP.
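
For example, to review the CredentialsRequest CRs for the version that you plan to upgrade to, you can extract them from the target release image. The following sketch uses AWS; substitute your cloud provider and the release image of the new version:

$ oc adm release extract \
  --credentials-requests \
  --cloud=aws \
  --to=<path_to_directory_with_list_of_credentials_requests>/credrequests \
  --from=quay.io/<path_to>/ocp-release:<version>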

18.3.3.1. Rotating cloud provider credentials manually

If your cloud provider credentials are changed for any reason, you must manually update the secret that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials.

The process for rotating cloud credentials depends on the mode that the CCO is configured to use. After you rotate credentials for a cluster that is using mint mode, you must manually remove the component credentials that were created by the removed credential.

Prerequisites

  • Your cluster is installed on a platform that supports rotating cloud credentials manually with the CCO mode that you are using:

    • For passthrough mode, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Red Hat OpenStack Platform (RHOSP), Red Hat Virtualization (RHV), and VMware vSphere are supported.
  • You have changed the credentials that are used to interface with your cloud provider.
  • The new credentials have sufficient permissions for the mode the CCO is configured to use in your cluster.

Procedure

  1. In the Administrator perspective of the web console, navigate to Workloads → Secrets.
  2. In the table on the Secrets page, find the root secret for your cloud provider.

    Platform            Secret name
    AWS                 aws-creds
    Azure               azure-credentials
    GCP                 gcp-credentials
    RHOSP               openstack-credentials
    RHV                 ovirt-credentials
    VMware vSphere      vsphere-creds

  3. Click the Options menu (⋮) in the same row as the secret and select Edit Secret.
  4. Record the contents of the Value field or fields. You can use this information to verify that the value is different after updating the credentials.
  5. Update the text in the Value field or fields with the new authentication information for your cloud provider, and then click Save.
  6. If you are updating the credentials for a vSphere cluster that does not have the vSphere CSI Driver Operator enabled, you must force a rollout of the Kubernetes controller manager to apply the updated credentials.

    Note

    If the vSphere CSI Driver Operator is enabled, this step is not required.

    To apply the updated vSphere credentials, log in to the OpenShift Container Platform CLI as a user with the cluster-admin role and run the following command:

    $ oc patch kubecontrollermanager cluster \
      -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date )"'"}}' \
      --type=merge

    While the credentials are rolling out, the status of the Kubernetes Controller Manager Operator reports Progressing=true. To view the status, run the following command:

    $ oc get co kube-controller-manager

Verification

To verify that the credentials have changed:

  1. In the Administrator perspective of the web console, navigate to Workloads → Secrets.
  2. Verify that the contents of the Value field or fields have changed.

Additional resources

18.3.4. Reducing permissions after installation

When using passthrough mode, each component has the same permissions used by all other components. If you do not reduce the permissions after installing, all components have the broad permissions that are required to run the installer.

After installation, you can reduce the permissions on your credential to only those that are required to run the cluster, as defined by the CredentialsRequest CRs in the release image for the version of OpenShift Container Platform that you are using.

To locate the CredentialsRequest CRs that are required for AWS, Azure, or GCP and learn how to change the permissions the CCO uses, see Manually creating IAM for AWS, Azure, or GCP.
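
To see which credentials each component currently requests, and the secret that each CredentialsRequest CR references, you can run a command similar to the following sketch:

$ oc -n openshift-cloud-credential-operator get credentialsrequests \
  -o custom-columns=NAME:.metadata.name,SECRET:.spec.secretRef.name,NAMESPACE:.spec.secretRef.namespace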

18.3.5. Additional resources

18.4. Using manual mode

Manual mode is supported for Alibaba Cloud, Amazon Web Services (AWS), Microsoft Azure, IBM Cloud, and Google Cloud Platform (GCP).

In manual mode, a user manages cloud credentials instead of the Cloud Credential Operator (CCO). To use this mode, you must examine the CredentialsRequest CRs in the release image for the version of OpenShift Container Platform that you are running or installing, create corresponding credentials in the underlying cloud provider, and create Kubernetes Secrets in the correct namespaces to satisfy all CredentialsRequest CRs for the cluster’s cloud provider.

Using manual mode allows each cluster component to have only the permissions it requires, without storing an administrator-level credential in the cluster. This mode also does not require connectivity to the AWS public IAM endpoint. However, you must manually reconcile permissions with new release images for every upgrade.
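
As an illustration of this workflow, the following sketch creates one such secret on an AWS cluster. The secret name and namespace are examples based on the machine API CredentialsRequest; for your cluster, take the name and namespace from the spec.secretRef field of each CredentialsRequest object, and supply the equivalent manifests in the installation directory if you are creating the secrets at installation time:

$ oc create secret generic aws-cloud-credentials \
  -n openshift-machine-api \
  --from-literal=aws_access_key_id=<access_key_id> \
  --from-literal=aws_secret_access_key=<secret_access_key>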

For information about configuring your cloud provider to use manual mode, see the manual credentials management options for your cloud provider.

18.4.1. Manual mode with cloud credentials created and managed outside of the cluster

An AWS or GCP cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster using the AWS Security Token Service (STS) or GCP Workload Identity. With this configuration, the CCO uses temporary credentials for different components.

For more information, see Using manual mode with Amazon Web Services Security Token Service or Using manual mode with GCP Workload Identity.

18.4.2. Updating cloud provider resources with manually maintained credentials

Before upgrading a cluster with manually maintained credentials, you must create any new credentials for the release image that you are upgrading to. You must also review the required permissions for existing credentials and accommodate any new permissions requirements in the new release for those components.

Procedure

  1. Extract and examine the CredentialsRequest custom resource for the new release.

    The "Manually creating IAM" section of the installation content for your cloud provider explains how to obtain and use the credentials required for your cloud.

  2. Update the manually maintained credentials on your cluster:

    • Create new secrets for any CredentialsRequest custom resources that are added by the new release image.
    • If the CredentialsRequest custom resources for any existing credentials that are stored in secrets have changed permissions requirements, update the permissions as required.

Next steps

  • Update the upgradeable-to annotation to indicate that the cluster is ready to upgrade.

18.4.2.1. Indicating that the cluster is ready to upgrade

The Cloud Credential Operator (CCO) Upgradeable status for a cluster with manually maintained credentials is False by default.

Prerequisites

  • For the release image that you are upgrading to, you have processed any new credentials manually or by using the Cloud Credential Operator utility (ccoctl).
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Log in to oc on the cluster as a user with the cluster-admin role.
  2. Edit the CloudCredential resource to add an upgradeable-to annotation within the metadata field by running the following command:

    $ oc edit cloudcredential cluster

    Text to add

    ...
      metadata:
        annotations:
          cloudcredential.openshift.io/upgradeable-to: <version_number>
    ...

    Where <version_number> is the version that you are upgrading to, in the format x.y.z. For example, use 4.10.2 for OpenShift Container Platform 4.10.2.

    It may take several minutes after adding the annotation for the upgradeable status to change. If you prefer not to edit the resource interactively, you can set the annotation with a single command, as shown in the sketch that follows.
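
    The following sketch sets the same annotation with oc annotate; <version_number> is the version that you are upgrading to, as described above:

    $ oc annotate cloudcredential cluster \
      cloudcredential.openshift.io/upgradeable-to=<version_number> \
      --overwrite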

Verification

  1. In the Administrator perspective of the web console, navigate to Administration → Cluster Settings.
  2. To view the CCO status details, click cloud-credential in the Cluster Operators list.

    • If the Upgradeable status in the Conditions section is False, verify that the upgradeable-to annotation is free of typographical errors.
  3. When the Upgradeable status in the Conditions section is True, begin the OpenShift Container Platform upgrade.

18.4.3. Additional resources

18.5. Using manual mode with Amazon Web Services Security Token Service

Manual mode with STS is supported for Amazon Web Services (AWS).

Note

This credentials strategy is supported for only new OpenShift Container Platform clusters and must be configured during installation. You cannot reconfigure an existing cluster that uses a different credentials strategy to use this feature.

18.5.1. About manual mode with AWS Security Token Service

In manual mode with STS, the individual OpenShift Container Platform cluster components use AWS Security Token Service (STS) to assign components IAM roles that provide short-term, limited-privilege security credentials. These credentials are associated with IAM roles that are specific to each component that makes AWS API calls.

Requests for new and refreshed credentials are automated by using an appropriately configured AWS IAM OpenID Connect (OIDC) identity provider, combined with AWS IAM roles. OpenShift Container Platform signs service account tokens that are trusted by AWS IAM, and can be projected into a pod and used for authentication. Tokens are refreshed after one hour.

Figure 18.2. STS authentication flow

Detailed authentication flow between AWS and the cluster when using AWS STS

Using manual mode with STS changes the content of the AWS credentials that are provided to individual OpenShift Container Platform components.

AWS secret format using long-lived credentials

apiVersion: v1
kind: Secret
metadata:
  namespace: <target-namespace> 1
  name: <target-secret-name> 2
data:
  aws_access_key_id: <base64-encoded-access-key-id>
  aws_secret_access_key: <base64-encoded-secret-access-key>

1
The namespace for the component.
2
The name of the component secret.

AWS secret format with STS

apiVersion: v1
kind: Secret
metadata:
  namespace: <target-namespace> 1
  name: <target-secret-name> 2
stringData:
  credentials: |-
    [default]
    sts_regional_endpoints = regional
    role_arn = <operator-role-arn> 3
    web_identity_token_file = <path-to-token> 4

1
The namespace for the component.
2
The name of the component secret.
3
The ARN of the IAM role for the component.
4
The path to the service account token inside the pod. By convention, this is /var/run/secrets/openshift/serviceaccount/token for OpenShift Container Platform components.

18.5.2. Installing an OpenShift Container Platform cluster configured for manual mode with STS

To install a cluster that is configured to use the Cloud Credential Operator (CCO) in manual mode with STS, complete the procedures in the following sections.

Note

Because the cluster is operating in manual mode when using STS, it is not able to create new credentials for components with the permissions that they require. When upgrading to a different minor version of OpenShift Container Platform, there are often new AWS permission requirements. Before upgrading a cluster that is using STS, the cluster administrator must manually ensure that the AWS permissions are sufficient for existing components and available to any new components.

18.5.2.1. Configuring the Cloud Credential Operator utility

To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility (ccoctl) binary.

Note

The ccoctl utility is a Linux binary that must run in a Linux environment.

Prerequisites

  • You have access to an OpenShift Container Platform account with cluster administrator access.
  • You have installed the OpenShift CLI (oc).
  • You have created an AWS account for the ccoctl utility to use with the following permissions:

    Table 18.3. Required AWS permissions
    Permission type             Required permissions

    iam permissions

    • iam:CreateOpenIDConnectProvider
    • iam:CreateRole
    • iam:DeleteOpenIDConnectProvider
    • iam:DeleteRole
    • iam:DeleteRolePolicy
    • iam:GetOpenIDConnectProvider
    • iam:GetRole
    • iam:GetUser
    • iam:ListOpenIDConnectProviders
    • iam:ListRolePolicies
    • iam:ListRoles
    • iam:PutRolePolicy
    • iam:TagOpenIDConnectProvider
    • iam:TagRole

    s3 permissions

    • s3:CreateBucket
    • s3:DeleteBucket
    • s3:DeleteObject
    • s3:GetBucketAcl
    • s3:GetBucketTagging
    • s3:GetObject
    • s3:GetObjectAcl
    • s3:GetObjectTagging
    • s3:ListBucket
    • s3:PutBucketAcl
    • s3:PutBucketPolicy
    • s3:PutBucketPublicAccessBlock
    • s3:PutBucketTagging
    • s3:PutObject
    • s3:PutObjectAcl
    • s3:PutObjectTagging

    cloudfront permissions

    • cloudfront:ListCloudFrontOriginAccessIdentities
    • cloudfront:ListDistributions
    • cloudfront:ListTagsForResource

    If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions:

    • cloudfront:CreateCloudFrontOriginAccessIdentity
    • cloudfront:CreateDistribution
    • cloudfront:DeleteCloudFrontOriginAccessIdentity
    • cloudfront:DeleteDistribution
    • cloudfront:GetCloudFrontOriginAccessIdentity
    • cloudfront:GetCloudFrontOriginAccessIdentityConfig
    • cloudfront:GetDistribution
    • cloudfront:TagResource
    • cloudfront:UpdateDistribution
    Note

    These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command.

Procedure

  1. Obtain the OpenShift Container Platform release image:

    $ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
  2. Get the CCO container image from the OpenShift Container Platform release image:

    $ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE)
    Note

    Ensure that the architecture of the $RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool.

  3. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image:

    $ oc image extract $CCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret
  4. Change the permissions to make ccoctl executable:

    $ chmod 775 ccoctl

Verification

  • To verify that ccoctl is ready to use, display the help file:

    $ ccoctl --help

    Output of ccoctl --help

    OpenShift credentials provisioning tool
    
    Usage:
      ccoctl [command]
    
    Available Commands:
      alibabacloud Manage credentials objects for alibaba cloud
      aws          Manage credentials objects for AWS cloud
      gcp          Manage credentials objects for Google cloud
      help         Help about any command
      ibmcloud     Manage credentials objects for IBM Cloud
    
    Flags:
      -h, --help   help for ccoctl
    
    Use "ccoctl [command] --help" for more information about a command.

18.5.2.2. Creating AWS resources with the Cloud Credential Operator utility

You can use the CCO utility (ccoctl) to create the required AWS resources individually, or with a single command.

18.5.2.2.1. Creating AWS resources individually

If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. For example, this option might be useful for an organization that shares the responsibility for creating these resources among different users or departments.

Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically.

Note

By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory.

Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters.
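
For example, a minimal sketch of this review workflow might look like the following. It assumes that the subcommand you are running supports the --dry-run flag described above, and the JSON file name is a placeholder for a file that ccoctl writes to your output directory:

$ ccoctl aws create-identity-provider \
  --name=<name> \
  --region=<aws_region> \
  --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public \
  --dry-run

$ aws iam create-open-id-connect-provider \
  --cli-input-json file://<path_to_generated_json_file>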

Prerequisites

  • Extract and prepare the ccoctl binary.

Procedure

  1. Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster:

    $ ccoctl aws create-key-pair

    Example output:

    2021/04/13 11:01:02 Generating RSA keypair
    2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private
    2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public
    2021/04/13 11:01:03 Copying signing key for use by installer

    where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files.

    This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key.

  2. Create an OpenID Connect identity provider and S3 bucket on AWS:

    $ ccoctl aws create-identity-provider \
    --name=<name> \
    --region=<aws_region> \
    --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public

    where:

    • <name> is the name used to tag any cloud resources that are created for tracking.
    • <aws_region> is the AWS region in which cloud resources will be created.
    • <path_to_ccoctl_output_dir>/serviceaccount-signer.public is the path to the public key file that the ccoctl aws create-key-pair command generated.

    Example output:

    2021/04/13 11:16:09 Bucket <name>-oidc created
    2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated
    2021/04/13 11:16:10 Reading public key
    2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated
    2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com

    where 02-openid-configuration is a discovery document and 03-keys.json is a JSON web key set file.

    This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml. This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens.

  3. Create IAM roles for each component in the cluster.

    1. Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image:

      $ oc adm release extract --credentials-requests \
      --cloud=aws \
      --to=<path_to_directory_with_list_of_credentials_requests>/credrequests \ 1
      --from=quay.io/<path_to>/ocp-release:<version>
      1
      credrequests is the directory where the list of CredentialsRequest objects is stored. This command creates the directory if it does not exist.
    2. Use the ccoctl tool to process all CredentialsRequest objects in the credrequests directory:

      $ ccoctl aws create-iam-roles \
      --name=<name> \
      --region=<aws_region> \
      --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \
      --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com
      Note

      For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter.

      If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter.

      For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image.

Verification

  • To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory:

    $ ll <path_to_ccoctl_output_dir>/manifests

    Example output:

    total 24
    -rw-------. 1 <user> <user> 161 Apr 13 11:42 cluster-authentication-02-config.yaml
    -rw-------. 1 <user> <user> 379 Apr 13 11:59 openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml
    -rw-------. 1 <user> <user> 353 Apr 13 11:59 openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml
    -rw-------. 1 <user> <user> 355 Apr 13 11:59 openshift-image-registry-installer-cloud-credentials-credentials.yaml
    -rw-------. 1 <user> <user> 339 Apr 13 11:59 openshift-ingress-operator-cloud-credentials-credentials.yaml
    -rw-------. 1 <user> <user> 337 Apr 13 11:59 openshift-machine-api-aws-cloud-credentials-credentials.yaml

You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles.
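
For example, if you have the AWS CLI installed and configured, the following sketch lists roles whose names contain the tracking name that you passed to ccoctl; adjust the filter as needed:

$ aws iam list-roles \
  --query "Roles[?contains(RoleName, '<name>')].RoleName" \
  --output text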

18.5.2.2.2. Creating AWS resources with a single command

If you do not need to review the JSON files that the ccoctl tool creates before modifying AWS resources, and if the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources.

Otherwise, you can create the AWS resources individually.

Note

By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory.

Prerequisites

You must have:

  • Extracted and prepared the ccoctl binary.

Procedure

  1. Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command:

    $ oc adm release extract \
    --credentials-requests \
    --cloud=aws \
    --to=<path_to_directory_with_list_of_credentials_requests>/credrequests \ 1
    --from=quay.io/<path_to>/ocp-release:<version>
    1
    credrequests is the directory where the list of CredentialsRequest objects is stored. This command creates the directory if it does not exist.
    Note

    This command can take a few moments to run.

  2. If your cluster uses cluster capabilities to disable one or more optional components, delete the CredentialsRequest custom resources for any disabled components.

    Example credrequests directory contents for OpenShift Container Platform 4.12 on AWS

    0000_30_machine-api-operator_00_credentials-request.yaml 1
    0000_50_cloud-credential-operator_05-iam-ro-credentialsrequest.yaml 2
    0000_50_cluster-image-registry-operator_01-registry-credentials-request.yaml 3
    0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 4
    0000_50_cluster-network-operator_02-cncc-credentials.yaml 5
    0000_50_cluster-storage-operator_03_credentials_request_aws.yaml 6

    1
    The Machine API Operator CR is required.
    2
    The Cloud Credential Operator CR is required.
    3
    The Image Registry Operator CR is required.
    4
    The Ingress Operator CR is required.
    5
    The Network Operator CR is required.
    6
    The Storage Operator CR is an optional component and might be disabled in your cluster.
  3. Use the ccoctl tool to process all CredentialsRequest objects in the credrequests directory:

    $ ccoctl aws create-all \
      --name=<name> \1
      --region=<aws_region> \2
      --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \3
      --output-dir=<path_to_ccoctl_output_dir> \4
      --create-private-s3-bucket 5
    1
    Specify the name used to tag any cloud resources that are created for tracking.
    2
    Specify the AWS region in which cloud resources will be created.
    3
    Specify the directory containing the files for the component CredentialsRequest objects.
    4
    Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run.
    5
    Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter.
    Note

    If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter.

Verification

  • To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory:

    $ ls <path_to_ccoctl_output_dir>/manifests

    Example output:

    cluster-authentication-02-config.yaml
    openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml
    openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml
    openshift-image-registry-installer-cloud-credentials-credentials.yaml
    openshift-ingress-operator-cloud-credentials-credentials.yaml
    openshift-machine-api-aws-cloud-credentials-credentials.yaml

You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles.

18.5.2.3. Running the installer

Prerequisites

  • Configure an account with the cloud platform that hosts your cluster.
  • Obtain the OpenShift Container Platform release image.

Procedure

  1. Change to the directory that contains the installation program and create the install-config.yaml file:

    $ openshift-install create install-config --dir <installation_directory>

    where <installation_directory> is the directory in which the installation program creates files.

  2. Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual.

    Example install-config.yaml configuration file

    apiVersion: v1
    baseDomain: cluster1.example.com
    credentialsMode: Manual 1
    compute:
    - architecture: amd64
      hyperthreading: Enabled

    1
    This line is added to set the credentialsMode parameter to Manual.
  3. Create the required OpenShift Container Platform installation manifests:

    $ openshift-install create manifests
  4. Copy the manifests that ccoctl generated to the manifests directory that the installation program created:

    $ cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/
  5. Copy the private key that ccoctl generated in the tls directory to the installation directory:

    $ cp -a /<path_to_ccoctl_output_dir>/tls .
  6. Run the OpenShift Container Platform installer:

    $ ./openshift-install create cluster

18.5.2.4. Verifying the installation

  1. Connect to the OpenShift Container Platform cluster.
  2. Verify that the cluster does not have root credentials:

    $ oc get secrets -n kube-system aws-creds

    The output should look similar to:

    Error from server (NotFound): secrets "aws-creds" not found
  3. Verify that the components are assuming the IAM roles that are specified in the secret manifests, instead of using credentials that are created by the CCO:

    Example command with the Image Registry Operator

    $ oc get secrets -n openshift-image-registry installer-cloud-credentials -o json | jq -r .data.credentials | base64 --decode

    The output should show the role and web identity token that are used by the component and look similar to:

    Example output with the Image Registry Operator

    [default]
    role_arn = arn:aws:iam::123456789:role/openshift-image-registry-installer-cloud-credentials
    web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token

18.5.3. Additional resources

18.6. Using manual mode with GCP Workload Identity

Manual mode with GCP Workload Identity is supported for Google Cloud Platform (GCP).

Note

This credentials strategy is supported for only new OpenShift Container Platform clusters and must be configured during installation. You cannot reconfigure an existing cluster that uses a different credentials strategy to use this feature.

18.6.1. About manual mode with GCP Workload Identity

In manual mode with GCP Workload Identity, the individual OpenShift Container Platform cluster components can impersonate IAM service accounts using short-term, limited-privilege credentials.

Requests for new and refreshed credentials are automated by using an appropriately configured OpenID Connect (OIDC) identity provider, combined with IAM service accounts. OpenShift Container Platform signs service account tokens that are trusted by GCP, and can be projected into a pod and used for authentication. Tokens are refreshed after one hour by default.

Figure 18.3. Workload Identity authentication flow

Detailed authentication flow between GCP and the cluster when using GCP Workload Identity

Using manual mode with GCP Workload Identity changes the content of the GCP credentials that are provided to individual OpenShift Container Platform components.

GCP secret format

apiVersion: v1
kind: Secret
metadata:
  namespace: <target_namespace> 1
  name: <target_secret_name> 2
data:
  service_account.json: <service_account> 3

1
The namespace for the component.
2
The name of the component secret.
3
The Base64 encoded service account.

Content of the Base64 encoded service_account.json file using long-lived credentials

{
   "type": "service_account", 1
   "project_id": "<project_id>",
   "private_key_id": "<private_key_id>",
   "private_key": "<private_key>", 2
   "client_email": "<client_email_address>",
   "client_id": "<client_id>",
   "auth_uri": "https://accounts.google.com/o/oauth2/auth",
   "token_uri": "https://oauth2.googleapis.com/token",
   "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
   "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/<client_email_address>"
}

1
The credential type is service_account.
2
The private RSA key that is used to authenticate to GCP. This key must be kept secure and is not rotated.

Content of the Base64 encoded service_account.json file using GCP Workload Identity

{
   "type": "external_account", 1
   "audience": "//iam.googleapis.com/projects/123456789/locations/global/workloadIdentityPools/test-pool/providers/test-provider", 2
   "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
   "token_url": "https://sts.googleapis.com/v1/token",
   "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<client_email_address>:generateAccessToken", 3
   "credential_source": {
      "file": "<path_to_token>", 4
      "format": {
         "type": "text"
      }
   }
}

1
The credential type is external_account.
2
The target audience is the GCP Workload Identity provider.
3
The resource URL of the service account that can be impersonated with these credentials.
4
The path to the service account token inside the pod. By convention, this is /var/run/secrets/openshift/serviceaccount/token for OpenShift Container Platform components.
Note

In OpenShift Container Platform 4.10.8, image registry support for using GCP Workload Identity was removed due to the discovery of an adverse impact to the image registry. To use the image registry on an OpenShift Container Platform 4.10.8 cluster that uses Workload Identity, you must configure the image registry to use long-lived credentials instead.

With OpenShift Container Platform 4.10.21, support for using GCP Workload Identity with the image registry is restored. For more information about the status of this feature between OpenShift Container Platform 4.10.8 and 4.10.20, see the related Knowledgebase article.

18.6.2. Installing an OpenShift Container Platform cluster configured for manual mode with GCP Workload Identity

To install a cluster that is configured to use the Cloud Credential Operator (CCO) in manual mode with GCP Workload Identity, complete the procedures in the following sections.

Note

Because the cluster is operating in manual mode when using GCP Workload Identity, it is not able to create new credentials for components with the permissions that they require. When upgrading to a different minor version of OpenShift Container Platform, there are often new GCP permission requirements. Before upgrading a cluster that is using GCP Workload Identity, the cluster administrator must manually ensure that the GCP permissions are sufficient for existing components and available to any new components.

18.6.2.1. Configuring the Cloud Credential Operator utility

To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility (ccoctl) binary.

Note

The ccoctl utility is a Linux binary that must run in a Linux environment.

Prerequisites

  • You have access to an OpenShift Container Platform account with cluster administrator access.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Obtain the OpenShift Container Platform release image:

    $ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
  2. Get the CCO container image from the OpenShift Container Platform release image:

    $ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE)
    Note

    Ensure that the architecture of the $RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool.

  3. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image:

    $ oc image extract $CCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret
  4. Change the permissions to make ccoctl executable:

    $ chmod 775 ccoctl

Verification

  • To verify that ccoctl is ready to use, display the help file:

    $ ccoctl --help

    Output of ccoctl --help

    OpenShift credentials provisioning tool
    
    Usage:
      ccoctl [command]
    
    Available Commands:
      alibabacloud Manage credentials objects for alibaba cloud
      aws          Manage credentials objects for AWS cloud
      gcp          Manage credentials objects for Google cloud
      help         Help about any command
      ibmcloud     Manage credentials objects for IBM Cloud
    
    Flags:
      -h, --help   help for ccoctl
    
    Use "ccoctl [command] --help" for more information about a command.

18.6.2.2. Creating GCP resources with the Cloud Credential Operator utility

You can use the ccoctl gcp create-all command to automate the creation of GCP resources.

Note

By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory.

Prerequisites

You must have:

  • Extracted and prepared the ccoctl binary.

Procedure

  1. Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command:

    $ oc adm release extract \
    --credentials-requests \
    --cloud=gcp \
    --to=<path_to_directory_with_list_of_credentials_requests>/credrequests \ 1
    --from=quay.io/<path_to>/ocp-release:<version>
    1
    credrequests is the directory where the list of CredentialsRequest objects is stored. This command creates the directory if it does not exist.
    Note

    This command can take a few moments to run.

  2. If your cluster uses cluster capabilities to disable one or more optional components, delete the CredentialsRequest custom resources for any disabled components.

    Example credrequests directory contents for OpenShift Container Platform 4.12 on GCP

    0000_26_cloud-controller-manager-operator_16_credentialsrequest-gcp.yaml 1
    0000_30_machine-api-operator_00_credentials-request.yaml 2
    0000_50_cloud-credential-operator_05-gcp-ro-credentialsrequest.yaml 3
    0000_50_cluster-image-registry-operator_01-registry-credentials-request-gcs.yaml 4
    0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 5
    0000_50_cluster-network-operator_02-cncc-credentials.yaml 6
    0000_50_cluster-storage-operator_03_credentials_request_gcp.yaml 7

    1
    The Cloud Controller Manager Operator CR is required.
    2
    The Machine API Operator CR is required.
    3
    The Cloud Credential Operator CR is required.
    4
    The Image Registry Operator CR is required.
    5
    The Ingress Operator CR is required.
    6
    The Network Operator CR is required.
    7
    The Storage Operator CR is an optional component and might be disabled in your cluster.
  3. Use the ccoctl tool to process all CredentialsRequest objects in the credrequests directory:

    $ ccoctl gcp create-all \
    --name=<name> \
    --region=<gcp_region> \
    --project=<gcp_project_id> \
    --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests

    where:

    • <name> is the user-defined name for all created GCP resources used for tracking.
    • <gcp_region> is the GCP region in which cloud resources will be created.
    • <gcp_project_id> is the GCP project ID in which cloud resources will be created.
    • <path_to_directory_with_list_of_credentials_requests>/credrequests is the directory containing the files of CredentialsRequest manifests to create GCP service accounts.
    Note

    If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter.

Verification

  • To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory:

    $ ls <path_to_ccoctl_output_dir>/manifests

You can verify that the IAM service accounts are created by querying GCP. For more information, refer to GCP documentation on listing IAM service accounts.
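
For example, if you have the gcloud CLI installed and configured, the following sketch lists the service accounts in your project, filtered by the tracking name that you passed to ccoctl; adjust the filter as needed:

$ gcloud iam service-accounts list \
  --project=<gcp_project_id> \
  --filter="displayName:<name>"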

18.6.2.3. Running the installer

Prerequisites

  • Configure an account with the cloud platform that hosts your cluster.
  • Obtain the OpenShift Container Platform release image.

Procedure

  1. Change to the directory that contains the installation program and create the install-config.yaml file:

    $ openshift-install create install-config --dir <installation_directory>

    where <installation_directory> is the directory in which the installation program creates files.

  2. Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual.

    Example install-config.yaml configuration file

    apiVersion: v1
    baseDomain: cluster1.example.com
    credentialsMode: Manual 1
    compute:
    - architecture: amd64
      hyperthreading: Enabled

    1
    This line is added to set the credentialsMode parameter to Manual.
  3. Create the required OpenShift Container Platform installation manifests:

    $ openshift-install create manifests
  4. Copy the manifests that ccoctl generated to the manifests directory that the installation program created:

    $ cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/
  5. Copy the private key that ccoctl generated in the tls directory to the installation directory:

    $ cp -a /<path_to_ccoctl_output_dir>/tls .
  6. Run the OpenShift Container Platform installer:

    $ ./openshift-install create cluster

18.6.2.4. Verifying the installation

  1. Connect to the OpenShift Container Platform cluster.
  2. Verify that the cluster does not have root credentials:

    $ oc get secrets -n kube-system gcp-credentials

    The output should look similar to:

    Error from server (NotFound): secrets "gcp-credentials" not found
  3. Verify that the components are assuming the service accounts that are specified in the secret manifests, instead of using credentials that are created by the CCO:

    Example command with the Image Registry Operator

    $ oc get secrets -n openshift-image-registry installer-cloud-credentials -o json | jq -r '.data."service_account.json"' | base64 -d

    The output should show the role and web identity token that are used by the component and look similar to:

    Example output with the Image Registry Operator

    {
       "type": "external_account", 1
       "audience": "//iam.googleapis.com/projects/123456789/locations/global/workloadIdentityPools/test-pool/providers/test-provider",
       "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
       "token_url": "https://sts.googleapis.com/v1/token",
       "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<client-email-address>:generateAccessToken", 2
       "credential_source": {
          "file": "/var/run/secrets/openshift/serviceaccount/token",
          "format": {
             "type": "text"
          }
       }
    }

    1
    The credential type is external_account.
    2
    The resource URL of the service account used by the Image Registry Operator.

18.6.3. Additional resources

Legal Notice

Copyright © 2024 Red Hat, Inc.

OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).

Modified versions must remove all Red Hat trademarks.

Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.

Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.

Linux® is the registered trademark of Linus Torvalds in the United States and other countries.

Java® is a registered trademark of Oracle and/or its affiliates.

XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.

MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.

Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.

The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.
