Authentication and authorization
Configuring user authentication and access controls for users and services
Chapter 1. Overview of authentication and authorization
1.1. Glossary of common terms for OpenShift Container Platform authentication and authorization
This glossary defines common terms that are used in OpenShift Container Platform authentication and authorization.
- authentication
- Authentication determines access to an OpenShift Container Platform cluster and ensures that only authenticated users can access the cluster.
- authorization
- Authorization determines whether the identified user has permissions to perform the requested action.
- bearer token
- A bearer token is used to authenticate to the API with the header `Authorization: Bearer <token>`.
- Cloud Credential Operator
- The Cloud Credential Operator (CCO) manages cloud provider credentials as custom resource definitions (CRDs).
- config map
- A config map provides a way to inject configuration data into pods. You can reference the data stored in a config map in a volume of type `ConfigMap`. Applications running in a pod can use this data.
- containers
- Lightweight and executable images that consist of software and all its dependencies. Because containers virtualize the operating system, you can run containers in a data center, public or private cloud, or your local host.
- Custom Resource (CR)
- A CR is an extension of the Kubernetes API.
- group
- A group is a set of users. A group is useful for granting permissions to multiple users one time.
- HTPasswd
- HTPasswd updates the files that store user names and passwords for authentication of HTTP users.
- Keystone
- Keystone is a Red Hat OpenStack Platform (RHOSP) project that provides identity, token, catalog, and policy services.
- Lightweight directory access protocol (LDAP)
- LDAP is a protocol used to query user information from a directory service.
- manual mode
- In manual mode, a user manages cloud credentials instead of the Cloud Credential Operator (CCO).
- mint mode
- Mint mode is the default and recommended best practice setting for the Cloud Credential Operator (CCO) to use on the platforms for which it is supported. In this mode, the CCO uses the provided administrator-level cloud credential to create new credentials for components in the cluster with only the specific permissions that are required.
- namespace
- A namespace isolates specific system resources that are visible to all processes. Inside a namespace, only processes that are members of that namespace can see those resources.
- node
- A node is a worker machine in the OpenShift Container Platform cluster. A node is either a virtual machine (VM) or a physical machine.
- OAuth client
- An OAuth client is used to obtain a bearer token.
- OAuth server
- The OpenShift Container Platform control plane includes a built-in OAuth server that determines the user’s identity from the configured identity provider and creates an access token.
- OpenID Connect
- OpenID Connect is a protocol that authenticates users through single sign-on (SSO) to access sites that use OpenID providers.
- passthrough mode
- In passthrough mode, the Cloud Credential Operator (CCO) passes the provided cloud credential to the components that request cloud credentials.
- pod
- A pod is the smallest logical unit in Kubernetes. A pod is comprised of one or more containers to run in a worker node.
- regular users
- Users that are created automatically in the cluster upon first login or via the API.
- request header
- A request header is an HTTP header that provides information about the HTTP request context so that the server can tailor its response.
- role-based access control (RBAC)
- A key security control to ensure that cluster users and workloads have access to only the resources required to execute their roles.
- service accounts
- Service accounts are used by cluster components or applications.
- system users
- Users that are created automatically when the cluster is installed.
- users
- A user is an entity that can make requests to the API.
1.2. About authentication in OpenShift Container Platform
To control access to an OpenShift Container Platform cluster, a cluster administrator can configure user authentication and ensure only approved users access the cluster.
To interact with an OpenShift Container Platform cluster, users must first authenticate to the OpenShift Container Platform API in some way. You can authenticate by providing an OAuth access token or an X.509 client certificate in your requests to the OpenShift Container Platform API.
If you do not present a valid access token or certificate, your request is unauthenticated and you receive an HTTP 401 error.
An administrator can configure authentication through the following tasks:
- Configuring an identity provider: You can define any supported identity provider in OpenShift Container Platform and add it to your cluster.
- Configuring the internal OAuth server: The OpenShift Container Platform control plane includes a built-in OAuth server that determines the user’s identity from the configured identity provider and creates an access token. You can configure the token duration and inactivity timeout, and customize the internal OAuth server URL.
  Note: Users can view and manage OAuth tokens owned by them.
- Registering an OAuth client: OpenShift Container Platform includes several default OAuth clients. You can register and configure additional OAuth clients.
  Note: When users send a request for an OAuth token, they must specify either a default or custom OAuth client that receives and uses the token.
- Managing cloud provider credentials using the Cloud Credential Operator: Cluster components use cloud provider credentials to get permissions required to perform cluster-related tasks.
- Impersonating a system admin user: You can grant cluster administrator permissions to a user by impersonating a system admin user.
1.3. About authorization in OpenShift Container Platform
Authorization involves determining whether the identified user has permissions to perform the requested action.
Administrators can define permissions and assign them to users using the RBAC objects, such as rules, roles, and bindings. To understand how authorization works in OpenShift Container Platform, see Evaluating authorization.
You can also control access to an OpenShift Container Platform cluster through projects and namespaces.
Along with controlling user access to a cluster, you can also control the actions a pod can perform and the resources it can access using security context constraints (SCCs).
You can manage authorization for OpenShift Container Platform through the following tasks:
- Viewing local and cluster roles and bindings.
- Creating a local role and assigning it to a user or group.
- Creating a cluster role and assigning it to a user or group: OpenShift Container Platform includes a set of default cluster roles. You can create additional cluster roles and add them to a user or group.
- Creating a cluster-admin user: By default, your cluster has only one cluster administrator called `kubeadmin`. You can create another cluster administrator. Before creating a cluster administrator, ensure that you have configured an identity provider.
  Note: After creating the cluster-admin user, delete the existing `kubeadmin` user to improve cluster security.
- Creating service accounts: Service accounts provide a flexible way to control API access without sharing a regular user’s credentials. A user can create and use a service account in applications and also as an OAuth client.
- Scoping tokens: A scoped token is a token that identifies as a specific user who can perform only specific operations. You can create scoped tokens to delegate some of your permissions to another user or a service account.
- Syncing LDAP groups: You can manage user groups in one place by syncing the groups stored in an LDAP server with the OpenShift Container Platform user groups.
Chapter 2. Understanding authentication
For users to interact with OpenShift Container Platform, they must first authenticate to the cluster. The authentication layer identifies the user associated with requests to the OpenShift Container Platform API. The authorization layer then uses information about the requesting user to determine if the request is allowed.
As an administrator, you can configure authentication for OpenShift Container Platform.
2.1. Users
A user in OpenShift Container Platform is an entity that can make requests to the OpenShift Container Platform API. An OpenShift Container Platform `User` object represents an actor that can be granted permissions in the system by adding roles to the user or to the user's groups. Typically, this represents the account of a developer or administrator that is interacting with OpenShift Container Platform.
Several types of users can exist:
User type | Description
---|---
Regular users | This is the way most interactive OpenShift Container Platform users are represented. Regular users are created automatically in the system upon first login or can be created via the API. Regular users are represented with the `User` object.
System users | Many of these are created automatically when the infrastructure is defined, mainly for the purpose of enabling the infrastructure to interact with the API securely. They include a cluster administrator (with access to everything), a per-node user, users for use by routers and registries, and various others. Finally, there is an `anonymous` system user that is used by default for unauthenticated requests.
Service accounts | These are special system users associated with projects; some are created automatically when the project is first created, while project administrators can create more for the purpose of defining access to the contents of each project. Service accounts are represented with the `ServiceAccount` object.
Each user must authenticate in some way to access OpenShift Container Platform. API requests with no authentication or invalid authentication are authenticated as requests by the `anonymous` system user. After authentication, policy determines what the user is authorized to do.
2.2. Groups
A user can be assigned to one or more groups, each of which represent a certain set of users. Groups are useful when managing authorization policies to grant permissions to multiple users at once, for example allowing access to objects within a project, versus granting them to users individually.
In addition to explicitly defined groups, there are also system groups, or virtual groups, that are automatically provisioned by the cluster.
The following default virtual groups are most important:
Virtual group | Description
---|---
system:authenticated | Automatically associated with all authenticated users.
system:authenticated:oauth | Automatically associated with all users authenticated with an OAuth access token.
system:unauthenticated | Automatically associated with all unauthenticated users.
2.3. API authentication
Requests to the OpenShift Container Platform API are authenticated using the following methods:
- OAuth access tokens
  - Obtained from the OpenShift Container Platform OAuth server using the `<namespace_route>/oauth/authorize` and `<namespace_route>/oauth/token` endpoints.
  - Sent as an `Authorization: Bearer…` header.
  - Sent as a websocket subprotocol header in the form `base64url.bearer.authorization.k8s.io.<base64url-encoded-token>` for websocket requests.
- X.509 client certificates
  - Requires an HTTPS connection to the API server.
  - Verified by the API server against a trusted certificate authority bundle.
  - The API server creates and distributes certificates to controllers to authenticate themselves.
Any request with an invalid access token or an invalid certificate is rejected by the authentication layer with a 401 error.
If no access token or certificate is presented, the authentication layer assigns the `system:anonymous` virtual user and the `system:unauthenticated` virtual group to the request. This allows the authorization layer to determine which requests, if any, an anonymous user is allowed to make.
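For example, a client that already holds an OAuth access token can call the API directly by sending it as a bearer header. The following sketch assumes you are logged in with `oc`; the API server URL and CA bundle path are placeholders to adjust, and the endpoint shown returns the current user:

$ TOKEN=$(oc whoami -t)

$ curl --cacert </path/to/ca.crt> \
    -H "Authorization: Bearer $TOKEN" \
    "https://api.<cluster_domain>:6443/apis/user.openshift.io/v1/users/~"

Omitting the `Authorization` header makes the same request run as the `system:anonymous` user, as described above.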
2.3.1. OpenShift Container Platform OAuth server
The OpenShift Container Platform master includes a built-in OAuth server. Users obtain OAuth access tokens to authenticate themselves to the API.
When a person requests a new OAuth token, the OAuth server uses the configured identity provider to determine the identity of the person making the request.
It then determines what user that identity maps to, creates an access token for that user, and returns the token for use.
2.3.1.1. OAuth token requests
Every request for an OAuth token must specify the OAuth client that will receive and use the token. The following OAuth clients are automatically created when starting the OpenShift Container Platform API:
OAuth client | Usage
---|---
openshift-browser-client | Requests tokens at `<namespace_route>/oauth/token/request` with a user-agent that can handle interactive logins.
openshift-challenging-client | Requests tokens with a user-agent that can handle `WWW-Authenticate` challenges.

`<namespace_route>` refers to the namespace route. This is found by running the following command:

$ oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host
All requests for OAuth tokens involve a request to `<namespace_route>/oauth/authorize`. Most authentication integrations place an authenticating proxy in front of this endpoint, or configure OpenShift Container Platform to validate credentials against a backing identity provider. Requests to `<namespace_route>/oauth/authorize` can come from user-agents that cannot display interactive login pages, such as the CLI. Therefore, OpenShift Container Platform supports authenticating using a `WWW-Authenticate` challenge in addition to interactive login flows.
If an authenticating proxy is placed in front of the `<namespace_route>/oauth/authorize` endpoint, it sends unauthenticated, non-browser user-agents `WWW-Authenticate` challenges rather than displaying an interactive login page or redirecting to an interactive login flow.
To prevent cross-site request forgery (CSRF) attacks against browser clients, Basic authentication challenges are only sent if an `X-CSRF-Token` header is present on the request. Clients that expect to receive Basic `WWW-Authenticate` challenges must set this header to a non-empty value.
If the authenticating proxy cannot support `WWW-Authenticate` challenges, or if OpenShift Container Platform is configured to use an identity provider that does not support `WWW-Authenticate` challenges, you must use a browser to manually obtain a token from `<namespace_route>/oauth/token/request`.
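As a rough sketch of this challenge flow, a non-browser client might request a token as follows. This assumes an identity provider that can answer Basic `WWW-Authenticate` challenges (for example, htpasswd or LDAP) and uses placeholder credentials and CA path; the `X-CSRF-Token` value only needs to be non-empty:

$ curl -u <username>:<password> \
    -H "X-CSRF-Token: 1" \
    --cacert </path/to/ca.crt> \
    -s -o /dev/null -D - \
    "https://<namespace_route>/oauth/authorize?response_type=token&client_id=openshift-challenging-client"

On success, the server answers with a 302 redirect whose `Location` header carries the token in its `access_token` fragment parameter, as described in the OAuth token request flows and responses section later in this document.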
2.3.1.2. API impersonation
You can configure a request to the OpenShift Container Platform API to act as though it originated from another user. For more information, see User impersonation in the Kubernetes documentation.
2.3.1.3. Authentication metrics for Prometheus
OpenShift Container Platform captures the following Prometheus system metrics during authentication attempts:
- `openshift_auth_basic_password_count` counts the number of `oc login` user name and password attempts.
- `openshift_auth_basic_password_count_result` counts the number of `oc login` user name and password attempts by result, `success` or `error`.
- `openshift_auth_form_password_count` counts the number of web console login attempts.
- `openshift_auth_form_password_count_result` counts the number of web console login attempts by result, `success` or `error`.
- `openshift_auth_password_total` counts the total number of `oc login` and web console login attempts.
Chapter 3. Configuring the internal OAuth server
3.1. OpenShift Container Platform OAuth server
The OpenShift Container Platform master includes a built-in OAuth server. Users obtain OAuth access tokens to authenticate themselves to the API.
When a person requests a new OAuth token, the OAuth server uses the configured identity provider to determine the identity of the person making the request.
It then determines what user that identity maps to, creates an access token for that user, and returns the token for use.
3.2. OAuth token request flows and responses
The OAuth server supports standard authorization code grant and the implicit grant OAuth authorization flows.
When requesting an OAuth token using the implicit grant flow (`response_type=token`) with a `client_id` configured to request `WWW-Authenticate` challenges (like `openshift-challenging-client`), these are the possible server responses from `/oauth/authorize`, and how they should be handled:
Status | Content | Client response
---|---|---
302 | `access_token` parameter in the URL fragment of the `Location` header | Use the `access_token` value as the OAuth token.
302 | `error` query parameter in the `Location` header | Fail, optionally surfacing the `error` (and optional `error_description`) query values to the user.
302 | Other | Follow the redirect, and process the result using these rules.
401 | `WWW-Authenticate` header present | Respond to challenge if type is recognized (e.g. `Basic`, `Negotiate`), resubmit request, and process the result using these rules.
401 | `WWW-Authenticate` header missing | No challenge authentication is possible. Fail and show response body (which might contain links or details on alternate methods to obtain an OAuth token).
Other | Other | Fail, optionally surfacing response body to the user.
3.3. Options for the internal OAuth server
Several configuration options are available for the internal OAuth server.
3.3.1. OAuth token duration options
The internal OAuth server generates two kinds of tokens:
Token | Description |
---|---|
Access tokens | Longer-lived tokens that grant access to the API. |
Authorize codes | Short-lived tokens whose only use is to be exchanged for an access token. |
You can configure the default duration for both types of token. If necessary, you can override the duration of the access token by using an `OAuthClient` object definition.
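As a sketch of such an override, assuming the built-in `console` client and cluster-admin access, you could set `accessTokenMaxAgeSeconds` directly on the `OAuthClient` object; the value here matches the 48-hour example used later in this chapter:

$ oc patch oauthclient console --type=merge -p '{"accessTokenMaxAgeSeconds": 172800}'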
3.3.2. OAuth grant options
When the OAuth server receives token requests for a client to which the user has not previously granted permission, the action that the OAuth server takes is dependent on the OAuth client’s grant strategy.
The OAuth client requesting the token must provide its own grant strategy.
You can apply the following default methods:
Grant option | Description
---|---
auto | Auto-approve the grant and retry the request.
prompt | Prompt the user to approve or deny the grant.
3.4. Configuring the internal OAuth server’s token duration
You can configure default options for the internal OAuth server’s token duration.
By default, tokens are only valid for 24 hours. Existing sessions expire after this time elapses.
If the default time is insufficient, then this can be modified using the following procedure.
Procedure
Create a configuration file that contains the token duration options. The following file sets this to 48 hours, twice the default.
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  tokenConfig:
    accessTokenMaxAgeSeconds: 172800 1

- 1
- Set `accessTokenMaxAgeSeconds` to control the lifetime of access tokens. The default lifetime is 24 hours, or 86400 seconds. This attribute cannot be negative. If set to zero, the default lifetime is used.
Apply the new configuration file:
Note: Because you update the existing OAuth server, you must use the `oc apply` command to apply the change.

$ oc apply -f </path/to/file.yaml>
Confirm that the changes are in effect:
$ oc describe oauth.config.openshift.io/cluster
Example output
...
Spec:
  Token Config:
    Access Token Max Age Seconds:  172800
...
3.5. Configuring token inactivity timeout for the internal OAuth server
You can configure OAuth tokens to expire after a set period of inactivity. By default, no token inactivity timeout is set.
If the token inactivity timeout is also configured in your OAuth client, that value overrides the timeout that is set in the internal OAuth server configuration.
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` role.
- You have configured an identity provider (IDP).
Procedure
Update the `OAuth` configuration to set a token inactivity timeout.

Edit the `OAuth` object:

$ oc edit oauth cluster

Add the `spec.tokenConfig.accessTokenInactivityTimeout` field and set your timeout value:

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
...
spec:
  tokenConfig:
    accessTokenInactivityTimeout: 400s 1

- 1
- Set a value with the appropriate units, for example `400s` for 400 seconds, or `30m` for 30 minutes. The minimum allowed timeout value is `300s`.
- Save the file to apply the changes.
Check that the OAuth server pods have restarted:
$ oc get clusteroperators authentication
Do not continue to the next step until `PROGRESSING` is listed as `False`, as shown in the following output:

Example output

NAME             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication   4.17.0    True        False         False      145m

Check that a new revision of the Kubernetes API server pods has rolled out. This will take several minutes.

$ oc get clusteroperators kube-apiserver

Do not continue to the next step until `PROGRESSING` is listed as `False`, as shown in the following output:

Example output

NAME             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
kube-apiserver   4.17.0    True        False         False      145m

If `PROGRESSING` is showing `True`, wait a few minutes and try again.
Verification
- Log in to the cluster with an identity from your IDP.
- Execute a command and verify that it was successful.
- Wait longer than the configured timeout without using the identity. In this procedure’s example, wait longer than 400 seconds.
Try to execute a command from the same identity’s session.
This command should fail because the token should have expired due to inactivity longer than the configured timeout.
Example output
error: You must be logged in to the server (Unauthorized)
3.6. Customizing the internal OAuth server URL
You can customize the internal OAuth server URL by setting the custom hostname and TLS certificate in the `spec.componentRoutes` field of the cluster `Ingress` configuration.
If you update the internal OAuth server URL, you might break trust from components in the cluster that need to communicate with the OpenShift OAuth server to retrieve OAuth access tokens. Components that need to trust the OAuth server will need to include the proper CA bundle when calling OAuth endpoints. For example:
$ oc login -u <username> -p <password> --certificate-authority=<path_to_ca.crt> 1
- 1
- For self-signed certificates, the `ca.crt` file must contain the custom CA certificate, otherwise the login will not succeed.
The Cluster Authentication Operator publishes the OAuth server’s serving certificate in the `oauth-serving-cert` config map in the `openshift-config-managed` namespace. You can find the certificate in the `data.ca-bundle.crt` key of the config map.
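For example, one possible way to extract that bundle to a local file, assuming cluster access and the key name shown above, is:

$ oc get configmap oauth-serving-cert -n openshift-config-managed \
    -o jsonpath='{.data.ca-bundle\.crt}' > oauth-serving-ca.crt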
Prerequisites
- You have logged in to the cluster as a user with administrative privileges.
- You have created a secret in the `openshift-config` namespace containing the TLS certificate and key. This is required if the domain for the custom hostname suffix does not match the cluster domain suffix. The secret is optional if the suffix matches.
  Tip: You can create a TLS secret by using the `oc create secret tls` command.
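For reference, the general form of that command, with placeholder secret name and certificate paths, looks like the following:

$ oc create secret tls <secret_name> --cert=</path/to/tls.crt> --key=</path/to/tls.key> -n openshift-config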
Procedure
Edit the cluster `Ingress` configuration:

$ oc edit ingress.config.openshift.io cluster

Set the custom hostname and optionally the serving certificate and key:

apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
  name: cluster
spec:
  componentRoutes:
  - name: oauth-openshift
    namespace: openshift-authentication
    hostname: <custom_hostname> 1
    servingCertKeyPairSecret:
      name: <secret_name> 2
- Save the file to apply the changes.
3.7. OAuth server metadata
Applications running in OpenShift Container Platform might have to discover information about the built-in OAuth server. For example, they might have to discover what the address of the `<namespace_route>` is without manual configuration. To aid in this, OpenShift Container Platform implements the IETF OAuth 2.0 Authorization Server Metadata draft specification.
Thus, any application running inside the cluster can issue a `GET` request to https://openshift.default.svc/.well-known/oauth-authorization-server to fetch the following information:
{ "issuer": "https://<namespace_route>", 1 "authorization_endpoint": "https://<namespace_route>/oauth/authorize", 2 "token_endpoint": "https://<namespace_route>/oauth/token", 3 "scopes_supported": [ 4 "user:full", "user:info", "user:check-access", "user:list-scoped-projects", "user:list-projects" ], "response_types_supported": [ 5 "code", "token" ], "grant_types_supported": [ 6 "authorization_code", "implicit" ], "code_challenge_methods_supported": [ 7 "plain", "S256" ] }
- 1
- The authorization server’s issuer identifier, which is a URL that uses the `https` scheme and has no query or fragment components. This is the location where `.well-known` RFC 5785 resources containing information about the authorization server are published.
- 2
- URL of the authorization server’s authorization endpoint. See RFC 6749.
- 3
- URL of the authorization server’s token endpoint. See RFC 6749.
- 4
- JSON array containing a list of the OAuth 2.0 RFC 6749 scope values that this authorization server supports. Note that not all supported scope values are advertised.
- 5
- JSON array containing a list of the OAuth 2.0 `response_type` values that this authorization server supports. The array values used are the same as those used with the `response_types` parameter defined by "OAuth 2.0 Dynamic Client Registration Protocol" in RFC 7591.
- 6
- JSON array containing a list of the OAuth 2.0 grant type values that this authorization server supports. The array values used are the same as those used with the `grant_types` parameter defined by "OAuth 2.0 Dynamic Client Registration Protocol" in RFC 7591.
- 7
- JSON array containing a list of PKCE RFC 7636 code challenge methods supported by this authorization server. Code challenge method values are used in the `code_challenge_method` parameter defined in Section 4.3 of RFC 7636. The valid code challenge method values are those registered in the IANA "PKCE Code Challenge Methods" registry. See IANA OAuth Parameters.
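As a sketch of how a workload might perform this discovery from inside a pod, the following command could be used; it assumes that the service account CA bundle mounted into the pod trusts the certificate served on this endpoint (otherwise, supply an appropriate CA bundle):

$ curl -s --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    https://openshift.default.svc/.well-known/oauth-authorization-server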
3.8. Troubleshooting OAuth API events
In some cases the API server returns an `unexpected condition` error message that is difficult to debug without direct access to the API master log. The underlying reason for the error is purposely obscured in order to avoid providing an unauthenticated user with information about the server’s state.
A subset of these errors is related to service account OAuth configuration issues. These issues are captured in events that can be viewed by non-administrator users. When encountering an `unexpected condition` server error during OAuth, run `oc get events` to view these events under `ServiceAccount`.
The following example warns of a service account that is missing a proper OAuth redirect URI:
$ oc get events | grep ServiceAccount
Example output
1m 1m 1 proxy ServiceAccount Warning NoSAOAuthRedirectURIs service-account-oauth-client-getter system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>
Running `oc describe sa/<service_account_name>` reports any OAuth events associated with the given service account name.
$ oc describe sa/proxy | grep -A5 Events
Example output
Events:
  FirstSeen  LastSeen  Count  From                                 SubObjectPath  Type     Reason                 Message
  ---------  --------  -----  ----                                 -------------  -------- ------                 -------
  3m         3m        1      service-account-oauth-client-getter                 Warning  NoSAOAuthRedirectURIs  system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>
The following is a list of the possible event errors:
No redirect URI annotations or an invalid URI is specified

Reason                 Message
NoSAOAuthRedirectURIs  system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>

Invalid route specified

Reason                 Message
NoSAOAuthRedirectURIs  [routes.route.openshift.io "<name>" not found, system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>]

Invalid reference type specified

Reason                 Message
NoSAOAuthRedirectURIs  [no kind "<name>" is registered for version "v1", system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>]

Missing SA tokens

Reason           Message
NoSAOAuthTokens  system:serviceaccount:myproject:proxy has no tokens
Chapter 4. Configuring OAuth clients
Several OAuth clients are created by default in OpenShift Container Platform. You can also register and configure additional OAuth clients.
4.1. Default OAuth clients
The following OAuth clients are automatically created when starting the OpenShift Container Platform API:
OAuth client | Usage
---|---
openshift-browser-client | Requests tokens at `<namespace_route>/oauth/token/request` with a user-agent that can handle interactive logins.
openshift-challenging-client | Requests tokens with a user-agent that can handle `WWW-Authenticate` challenges.
openshift-cli-client | Requests tokens by using a local HTTP server fetching an authorization code grant.

`<namespace_route>` refers to the namespace route. This is found by running the following command:

$ oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host
4.2. Registering an additional OAuth client
If you need an additional OAuth client to manage authentication for your OpenShift Container Platform cluster, you can register one.
Procedure
To register additional OAuth clients:
$ oc create -f <(echo '
kind: OAuthClient
apiVersion: oauth.openshift.io/v1
metadata:
  name: demo 1
secret: "..." 2
redirectURIs:
- "http://www.example.com/" 3
grantMethod: prompt 4
')
- 1
- The `name` of the OAuth client is used as the `client_id` parameter when making requests to `<namespace_route>/oauth/authorize` and `<namespace_route>/oauth/token`.
- 2
- The `secret` is used as the `client_secret` parameter when making requests to `<namespace_route>/oauth/token`.
- 3
- The `redirect_uri` parameter specified in requests to `<namespace_route>/oauth/authorize` and `<namespace_route>/oauth/token` must be equal to or prefixed by one of the URIs listed in the `redirectURIs` parameter value.
- 4
- The `grantMethod` is used to determine what action to take when this client requests tokens and has not yet been granted access by the user. Specify `auto` to automatically approve the grant and retry the request, or `prompt` to prompt the user to approve or deny the grant.
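To illustrate how these fields fit together, a browser-based integration registered as shown above would send users to an authorization URL built from the same values; the client name `demo` and the redirect URI are placeholders taken from the sample registration:

https://<namespace_route>/oauth/authorize?client_id=demo&response_type=code&redirect_uri=http://www.example.com/

After the user approves the grant (per the `prompt` grant method), the OAuth server redirects back to the listed redirect URI with an authorization code, which the client exchanges for an access token at `<namespace_route>/oauth/token` together with its `client_secret`.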
4.3. Configuring token inactivity timeout for an OAuth client
You can configure OAuth clients to expire OAuth tokens after a set period of inactivity. By default, no token inactivity timeout is set.
If the token inactivity timeout is also configured in the internal OAuth server configuration, the timeout that is set in the OAuth client overrides that value.
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` role.
- You have configured an identity provider (IDP).
Procedure
Update the `OAuthClient` configuration to set a token inactivity timeout.

Edit the `OAuthClient` object:

$ oc edit oauthclient <oauth_client> 1

- 1
- Replace `<oauth_client>` with the OAuth client to configure, for example, `console`.
Add the `accessTokenInactivityTimeoutSeconds` field and set your timeout value:

apiVersion: oauth.openshift.io/v1
grantMethod: auto
kind: OAuthClient
metadata:
...
accessTokenInactivityTimeoutSeconds: 600 1

- 1
- The minimum allowed timeout value in seconds is `300`.
- Save the file to apply the changes.
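Alternatively, a non-interactive sketch of the same change, again assuming the `console` client, is:

$ oc patch oauthclient console --type=merge -p '{"accessTokenInactivityTimeoutSeconds": 600}'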
Verification
- Log in to the cluster with an identity from your IDP. Be sure to use the OAuth client that you just configured.
- Perform an action and verify that it was successful.
- Wait longer than the configured timeout without using the identity. In this procedure’s example, wait longer than 600 seconds.
Try to perform an action from the same identity’s session.
This attempt should fail because the token should have expired due to inactivity longer than the configured timeout.
4.4. Additional resources
Chapter 5. Managing user-owned OAuth access tokens
Users can review their own OAuth access tokens and delete any that are no longer needed.
5.1. Listing user-owned OAuth access tokens
You can list your user-owned OAuth access tokens. Token names are not sensitive and cannot be used to log in.
Procedure
List all user-owned OAuth access tokens:
$ oc get useroauthaccesstokens
Example output
NAME       CLIENT NAME                    CREATED                EXPIRES                         REDIRECT URI                                                        SCOPES
<token1>   openshift-challenging-client   2021-01-11T19:25:35Z   2021-01-12 19:25:35 +0000 UTC   https://oauth-openshift.apps.example.com/oauth/token/implicit      user:full
<token2>   openshift-browser-client       2021-01-11T19:27:06Z   2021-01-12 19:27:06 +0000 UTC   https://oauth-openshift.apps.example.com/oauth/token/display       user:full
<token3>   console                        2021-01-11T19:26:29Z   2021-01-12 19:26:29 +0000 UTC   https://console-openshift-console.apps.example.com/auth/callback   user:full
List user-owned OAuth access tokens for a particular OAuth client:
$ oc get useroauthaccesstokens --field-selector=clientName="console"
Example output
NAME       CLIENT NAME   CREATED                EXPIRES                         REDIRECT URI                                                        SCOPES
<token3>   console       2021-01-11T19:26:29Z   2021-01-12 19:26:29 +0000 UTC   https://console-openshift-console.apps.example.com/auth/callback   user:full
5.2. Viewing the details of a user-owned OAuth access token
You can view the details of a user-owned OAuth access token.
Procedure
Describe the details of a user-owned OAuth access token:
$ oc describe useroauthaccesstokens <token_name>
Example output
Name:                        <token_name> 1
Namespace:
Labels:                      <none>
Annotations:                 <none>
API Version:                 oauth.openshift.io/v1
Authorize Token:             sha256~Ksckkug-9Fg_RWn_AUysPoIg-_HqmFI9zUL_CgD8wr8
Client Name:                 openshift-browser-client 2
Expires In:                  86400 3
Inactivity Timeout Seconds:  317 4
Kind:                        UserOAuthAccessToken
Metadata:
  Creation Timestamp:  2021-01-11T19:27:06Z
  Managed Fields:
    API Version:  oauth.openshift.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:authorizeToken:
      f:clientName:
      f:expiresIn:
      f:redirectURI:
      f:scopes:
      f:userName:
      f:userUID:
    Manager:         oauth-server
    Operation:       Update
    Time:            2021-01-11T19:27:06Z
  Resource Version:  30535
  Self Link:         /apis/oauth.openshift.io/v1/useroauthaccesstokens/<token_name>
  UID:               f9d00b67-ab65-489b-8080-e427fa3c6181
Redirect URI:                https://oauth-openshift.apps.example.com/oauth/token/display
Scopes:
  user:full 5
User Name:                   <user_name> 6
User UID:                    82356ab0-95f9-4fb3-9bc0-10f1d6a6a345
Events:                      <none>
- 1
- The token name, which is the sha256 hash of the token. Token names are not sensitive and cannot be used to log in.
- 2
- The client name, which describes where the token originated from.
- 3
- The value in seconds from the creation time before this token expires.
- 4
- If there is a token inactivity timeout set for the OAuth server, this is the value in seconds from the creation time before this token can no longer be used.
- 5
- The scopes for this token.
- 6
- The user name associated with this token.
5.3. Deleting user-owned OAuth access tokens
The `oc logout` command only invalidates the OAuth token for the active session. You can use the following procedure to delete any user-owned OAuth tokens that are no longer needed.
Deleting an OAuth access token logs out the user from all sessions that use the token.
Procedure
Delete the user-owned OAuth access token:
$ oc delete useroauthaccesstokens <token_name>
Example output
useroauthaccesstoken.oauth.openshift.io "<token_name>" deleted
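If you want to remove every token that a particular client created for you, one possible approach combines the field selector shown earlier with a delete; this example targets tokens created through `openshift-challenging-client`, which is the client used by `oc login`:

$ oc get useroauthaccesstokens --field-selector=clientName="openshift-challenging-client" -o name | xargs oc delete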
5.4. Adding unauthenticated groups to cluster roles
As a cluster administrator, you can add unauthenticated users to the following cluster roles in OpenShift Container Platform by creating a cluster role binding. Unauthenticated users do not have access to non-public cluster roles. This should only be done in specific use cases when necessary.
You can add unauthenticated users to the following cluster roles:
- `system:scope-impersonation`
- `system:webhook`
- `system:oauth-token-deleter`
- `self-access-reviewer`
Always verify compliance with your organization’s security standards when modifying unauthenticated access.
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` role.
- You have installed the OpenShift CLI (`oc`).
Procedure
Create a YAML file named `add-<cluster_role>-unauth.yaml` and add the following content:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  name: <cluster_role>access-unauthenticated
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: <cluster_role>
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:unauthenticated
Apply the configuration by running the following command:
$ oc apply -f add-<cluster_role>-unauth.yaml
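A possible verification is to confirm that the binding exists and, optionally, to check an anonymous request with impersonation. The resource checked in the second command (TokenReview creation, relevant for the `system:webhook` role) is an assumption; inspect the cluster role with `oc describe clusterrole <cluster_role>` to see exactly what it grants:

$ oc get clusterrolebinding <cluster_role>access-unauthenticated

$ oc auth can-i create tokenreviews --as="system:anonymous" --as-group="system:unauthenticated"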
Chapter 6. Understanding identity provider configuration
The OpenShift Container Platform master includes a built-in OAuth server. Developers and administrators obtain OAuth access tokens to authenticate themselves to the API.
As an administrator, you can configure OAuth to specify an identity provider after you install your cluster.
6.1. About identity providers in OpenShift Container Platform
By default, only a `kubeadmin` user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster.
OpenShift Container Platform user names containing `/`, `:`, and `%` are not supported.
6.2. Supported identity providers
You can configure the following types of identity providers:
Identity provider | Description
---|---
htpasswd | Configure the `htpasswd` identity provider to validate user names and passwords against a flat file generated using `htpasswd`.
Keystone | Configure the `keystone` identity provider to integrate your cluster with Keystone to enable shared authentication with an OpenStack Keystone v3 server.
LDAP | Configure the `ldap` identity provider to validate user names and passwords against an LDAPv3 server, using simple bind authentication.
Basic authentication | Configure a `basic-authentication` identity provider for users to log in with credentials validated against a remote identity provider.
Request header | Configure a `request-header` identity provider to identify users from request header values, such as `X-Remote-User`.
GitHub or GitHub Enterprise | Configure a `github` identity provider to validate user names and passwords against GitHub or GitHub Enterprise’s OAuth authentication server.
GitLab | Configure a `gitlab` identity provider to use GitLab.com or any other GitLab instance as an identity provider.
Google | Configure a `google` identity provider using Google’s OpenID Connect integration.
OpenID Connect | Configure an `oidc` identity provider to integrate with an OpenID Connect identity provider using an Authorization Code Flow.
Once an identity provider has been defined, you can use RBAC to define and apply permissions.
6.3. Removing the kubeadmin user
After you define an identity provider and create a new `cluster-admin` user, you can remove the `kubeadmin` user to improve cluster security.
If you follow this procedure before another user is a `cluster-admin`, then OpenShift Container Platform must be reinstalled. It is not possible to undo this command.
Prerequisites
- You must have configured at least one identity provider.
- You must have added the `cluster-admin` role to a user.
- You must be logged in as an administrator.
Procedure
Remove the `kubeadmin` secrets:

$ oc delete secrets kubeadmin -n kube-system
6.4. Identity provider parameters
The following parameters are common to all identity providers:
Parameter | Description
---|---
name | The provider name is prefixed to provider user names to form an identity name.
mappingMethod | Defines how new identities are mapped to users when they log in. Enter one of the following values: `claim` (the default), `lookup`, or `add`.

When adding or changing identity providers, you can map identities from the new provider to existing users by setting the `mappingMethod` parameter to `add`.
6.5. Sample identity provider CR
The following custom resource (CR) shows the parameters and default values that you use to configure an identity provider. This example uses the htpasswd identity provider.
Sample identity provider CR
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_identity_provider 1
    mappingMethod: claim 2
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret 3
6.6. Manually provisioning a user when using the lookup mapping method
Typically, identities are automatically mapped to users during login. The `lookup` mapping method disables this automatic mapping, which requires you to provision users manually. If you are using the `lookup` mapping method, use the following procedure for each user after configuring the identity provider.
Prerequisites
- You have installed the OpenShift CLI (`oc`).
Procedure
Create an OpenShift Container Platform user:
$ oc create user <username>
Create an OpenShift Container Platform identity:
$ oc create identity <identity_provider>:<identity_provider_user_id>
Where `<identity_provider_user_id>` is a name that uniquely represents the user in the identity provider.

Create a user identity mapping for the created user and identity:
$ oc create useridentitymapping <identity_provider>:<identity_provider_user_id> <username>
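For illustration only, with the provider name `my_identity_provider` from the sample CR above and a hypothetical user `bob` whose identity provider user ID is also `bob`, the sequence would be:

$ oc create user bob

$ oc create identity my_identity_provider:bob

$ oc create useridentitymapping my_identity_provider:bob bob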
Chapter 7. Configuring identity providers
7.1. Configuring an htpasswd identity provider
Configure the `htpasswd` identity provider to allow users to log in to OpenShift Container Platform with credentials from an htpasswd file.
To define an htpasswd identity provider, perform the following tasks:
- Create an `htpasswd` file to store the user and password information.
- Create a secret to represent the `htpasswd` file.
- Apply the resource to the default OAuth configuration to add the identity provider.
7.1.1. About identity providers in OpenShift Container Platform
By default, only a `kubeadmin` user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster.
OpenShift Container Platform user names containing `/`, `:`, and `%` are not supported.
7.1.2. About htpasswd authentication
Using htpasswd authentication in OpenShift Container Platform allows you to identify users based on an htpasswd file. An htpasswd file is a flat file that contains the user name and hashed password for each user. You can use the `htpasswd` utility to create this file.
Do not use htpasswd authentication in OpenShift Container Platform for production environments. Use htpasswd authentication only for development environments.
7.1.3. Creating the htpasswd file
See one of the following sections for instructions about how to create the htpasswd file:
7.1.3.1. Creating an htpasswd file using Linux
To use the htpasswd identity provider, you must generate a flat file that contains the user names and passwords for your cluster by using `htpasswd`.
Prerequisites
- Have access to the `htpasswd` utility. On Red Hat Enterprise Linux this is available by installing the `httpd-tools` package.
Procedure
Create or update your flat file with a user name and hashed password:
$ htpasswd -c -B -b </path/to/users.htpasswd> <username> <password>
The command generates a hashed version of the password.
For example:
$ htpasswd -c -B -b users.htpasswd <username> <password>
Example output
Adding password for user user1
Continue to add or update credentials to the file:
$ htpasswd -B -b </path/to/users.htpasswd> <user_name> <password>
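After adding entries, you can inspect the file; each line holds a user name and a hashed password. The output below is only illustrative, with placeholder hashes:

$ cat users.htpasswd

Example output

<username1>:<bcrypt_hash>
<username2>:<bcrypt_hash>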
7.1.3.2. Creating an htpasswd file using Windows
To use the htpasswd identity provider, you must generate a flat file that contains the user names and passwords for your cluster by using `htpasswd`.
Prerequisites
- Have access to `htpasswd.exe`. This file is included in the `\bin` directory of many Apache httpd distributions.
Procedure
Create or update your flat file with a user name and hashed password:
> htpasswd.exe -c -B -b <\path\to\users.htpasswd> <username> <password>
The command generates a hashed version of the password.
For example:
> htpasswd.exe -c -B -b users.htpasswd <username> <password>
Example output
Adding password for user user1
Continue to add or update credentials to the file:
> htpasswd.exe -b <\path\to\users.htpasswd> <username> <password>
7.1.4. Creating the htpasswd secret
To use the htpasswd identity provider, you must define a secret that contains the htpasswd user file.
Prerequisites
- Create an htpasswd file.
Procedure
Create a `Secret` object that contains the htpasswd users file:

$ oc create secret generic htpass-secret --from-file=htpasswd=<path_to_users.htpasswd> -n openshift-config 1

- 1
- The secret key containing the users file for the `--from-file` argument must be named `htpasswd`, as shown in the above command.

Tip: You can alternatively apply the following YAML to create the secret:

apiVersion: v1
kind: Secret
metadata:
  name: htpass-secret
  namespace: openshift-config
type: Opaque
data:
  htpasswd: <base64_encoded_htpasswd_file_contents>
7.1.5. Sample htpasswd CR
The following custom resource (CR) shows the parameters and acceptable values for an htpasswd identity provider.
htpasswd CR
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_htpasswd_provider 1
    mappingMethod: claim 2
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret 3
Additional resources
- See Identity provider parameters for information on parameters, such as `mappingMethod`, that are common to all identity providers.
7.1.6. Adding an identity provider to your cluster
After you install your cluster, add an identity provider to it so your users can authenticate.
Prerequisites
- Create an OpenShift Container Platform cluster.
- Create the custom resource (CR) for your identity providers.
- You must be logged in as an administrator.
Procedure
Apply the defined CR:
$ oc apply -f </path/to/CR>
Note: If a CR does not exist, `oc apply` creates a new CR and might trigger the following warning: `Warning: oc apply should be used on resources created by either oc create --save-config or oc apply`. In this case you can safely ignore this warning.

Log in to the cluster as a user from your identity provider, entering the password when prompted.
$ oc login -u <username>
Confirm that the user logged in successfully, and display the user name.
$ oc whoami
7.1.7. Updating users for an htpasswd identity provider
You can add or remove users from an existing htpasswd identity provider.
Prerequisites
- You have created a `Secret` object that contains the htpasswd user file. This procedure assumes that it is named `htpass-secret`.
- You have configured an htpasswd identity provider. This procedure assumes that it is named `my_htpasswd_provider`.
- You have access to the `htpasswd` utility. On Red Hat Enterprise Linux this is available by installing the `httpd-tools` package.
- You have cluster administrator privileges.
Procedure
Retrieve the htpasswd file from the `htpass-secret` `Secret` object and save the file to your file system:

$ oc get secret htpass-secret -ojsonpath={.data.htpasswd} -n openshift-config | base64 --decode > users.htpasswd
Add or remove users from the `users.htpasswd` file.

To add a new user:
$ htpasswd -bB users.htpasswd <username> <password>
Example output
Adding password for user <username>
To remove an existing user:
$ htpasswd -D users.htpasswd <username>
Example output
Deleting password for user <username>
Replace the `htpass-secret` `Secret` object with the updated users in the `users.htpasswd` file:

$ oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd --dry-run=client -o yaml -n openshift-config | oc replace -f -
Tip: You can alternatively apply the following YAML to replace the secret:

apiVersion: v1
kind: Secret
metadata:
  name: htpass-secret
  namespace: openshift-config
type: Opaque
data:
  htpasswd: <base64_encoded_htpasswd_file_contents>
If you removed one or more users, you must additionally remove existing resources for each user.
Delete the `User` object:

$ oc delete user <username>
Example output
user.user.openshift.io "<username>" deleted
Be sure to remove the user, otherwise the user can continue using their token as long as it has not expired.
Delete the `Identity` object for the user:

$ oc delete identity my_htpasswd_provider:<username>
Example output
identity.user.openshift.io "my_htpasswd_provider:<username>" deleted
7.1.8. Configuring identity providers using the web console
Configure your identity provider (IDP) through the web console instead of the CLI.
Prerequisites
- You must be logged in to the web console as a cluster administrator.
Procedure
- Navigate to Administration → Cluster Settings.
- Under the Configuration tab, click OAuth.
- Under the Identity Providers section, select your identity provider from the Add drop-down menu.
You can specify multiple IDPs through the web console without overwriting existing IDPs.
7.2. Configuring a Keystone identity provider
Configure the `keystone` identity provider to integrate your OpenShift Container Platform cluster with Keystone to enable shared authentication with an OpenStack Keystone v3 server configured to store users in an internal database. This configuration allows users to log in to OpenShift Container Platform with their Keystone credentials.
7.2.1. About identity providers in OpenShift Container Platform
By default, only a `kubeadmin` user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster.
OpenShift Container Platform user names containing `/`, `:`, and `%` are not supported.
7.2.2. About Keystone authentication
Keystone is an OpenStack project that provides identity, token, catalog, and policy services.
You can configure the integration with Keystone so that the new OpenShift Container Platform users are based on either the Keystone user names or unique Keystone IDs. With both methods, users log in by entering their Keystone user name and password. Basing the OpenShift Container Platform users on the Keystone ID is more secure because if you delete a Keystone user and create a new Keystone user with that user name, the new user might have access to the old user’s resources.
7.2.3. Creating the secret
Identity providers use OpenShift Container Platform `Secret` objects in the `openshift-config` namespace to contain the client secret, client certificates, and keys.
Procedure
Create a `Secret` object that contains the key and certificate by using the following command:

$ oc create secret tls <secret_name> --key=key.pem --cert=cert.pem -n openshift-config

Tip: You can alternatively apply the following YAML to create the secret:

apiVersion: v1
kind: Secret
metadata:
  name: <secret_name>
  namespace: openshift-config
type: kubernetes.io/tls
data:
  tls.crt: <base64_encoded_cert>
  tls.key: <base64_encoded_key>
7.2.4. Creating a config map
Identity providers use OpenShift Container Platform `ConfigMap` objects in the `openshift-config` namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider.
Procedure
Define an OpenShift Container Platform `ConfigMap` object containing the certificate authority by using the following command. The certificate authority must be stored in the `ca.crt` key of the `ConfigMap` object.

$ oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config

Tip: You can alternatively apply the following YAML to create the config map:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ca-config-map
  namespace: openshift-config
data:
  ca.crt: |
    <CA_certificate_PEM>
7.2.5. Sample Keystone CR
The following custom resource (CR) shows the parameters and acceptable values for a Keystone identity provider.
Keystone CR
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: keystoneidp 1
    mappingMethod: claim 2
    type: Keystone
    keystone:
      domainName: default 3
      url: https://keystone.example.com:5000 4
      ca: 5
        name: ca-config-map
      tlsClientCert: 6
        name: client-cert-secret
      tlsClientKey: 7
        name: client-key-secret
- 1
- This provider name is prefixed to provider user names to form an identity name.
- 2
- Controls how mappings are established between this provider’s identities and `User` objects.
- 3
- Keystone domain name. In Keystone, usernames are domain-specific. Only a single domain is supported.
- 4
- The URL to use to connect to the Keystone server (required). This must use https.
- 5
- Optional: Reference to an OpenShift Container Platform `ConfigMap` object containing the PEM-encoded certificate authority bundle to use in validating server certificates for the configured URL.
- 6
- Optional: Reference to an OpenShift Container Platform `Secret` object containing the client certificate to present when making requests to the configured URL.
- 7
- Reference to an OpenShift Container Platform `Secret` object containing the key for the client certificate. Required if `tlsClientCert` is specified.
Additional resources
- See Identity provider parameters for information on parameters, such as `mappingMethod`, that are common to all identity providers.
7.2.6. Adding an identity provider to your cluster
After you install your cluster, add an identity provider to it so your users can authenticate.
Prerequisites
- Create an OpenShift Container Platform cluster.
- Create the custom resource (CR) for your identity providers.
- You must be logged in as an administrator.
Procedure
Apply the defined CR:
$ oc apply -f </path/to/CR>
Note: If a CR does not exist, `oc apply` creates a new CR and might trigger the following warning: `Warning: oc apply should be used on resources created by either oc create --save-config or oc apply`. In this case you can safely ignore this warning.

Log in to the cluster as a user from your identity provider, entering the password when prompted.
$ oc login -u <username>
Confirm that the user logged in successfully, and display the user name.
$ oc whoami
7.3. Configuring an LDAP identity provider
Configure the `ldap` identity provider to validate user names and passwords against an LDAPv3 server, using simple bind authentication.
7.3.1. About identity providers in OpenShift Container Platform
By default, only a `kubeadmin` user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster.
OpenShift Container Platform user names containing `/`, `:`, and `%` are not supported.
7.3.2. About LDAP authentication
During authentication, the LDAP directory is searched for an entry that matches the provided user name. If a single unique match is found, a simple bind is attempted using the distinguished name (DN) of the entry plus the provided password.
These are the steps taken:
- Generate a search filter by combining the attribute and filter in the configured `url` with the user-provided user name.
- Search the directory using the generated filter. If the search does not return exactly one entry, deny access.
- Attempt to bind to the LDAP server using the DN of the entry retrieved from the search, and the user-provided password.
- If the bind is unsuccessful, deny access.
- If the bind is successful, build an identity using the configured attributes as the identity, email address, display name, and preferred user name.
The configured `url` is an RFC 2255 URL, which specifies the LDAP host and search parameters to use. The syntax of the URL is:
ldap://host:port/basedn?attribute?scope?filter
For this URL:
URL component | Description
---|---
ldap:// | For regular LDAP, use the string `ldap://`. For secure LDAP (LDAP over TLS), use `ldaps://` instead.
host:port | The name and port of the LDAP server. Defaults to `localhost:389` for ldap and `localhost:636` for ldaps.
basedn | The DN of the branch of the directory where all searches should start from. At the very least, this must be the top of your directory tree, but it could also specify a subtree in the directory.
attribute | The attribute to search for. Although RFC 2255 allows a comma-separated list of attributes, only the first attribute will be used, no matter how many are provided. If no attributes are provided, the default is to use `uid`.
scope | The scope of the search. Can be either `one` or `sub`. If the scope is not provided, the default is `sub`.
filter | A valid LDAP search filter. If not provided, defaults to `(objectClass=*)`.
When doing searches, the attribute, filter, and provided user name are combined to create a search filter that looks like:
(&(<filter>)(<attribute>=<username>))
For example, consider a URL of:
ldap://ldap.example.com/o=Acme?cn?sub?(enabled=true)
When a client attempts to connect using a user name of `bob`, the resulting search filter will be `(&(enabled=true)(cn=bob))`.

If the LDAP directory requires authentication to search, specify a `bindDN` and `bindPassword` to use to perform the entry search.
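Before wiring this into OpenShift Container Platform, it can help to verify the search outside the cluster. The following sketch uses the example URL above and the `ldapsearch` utility from the openldap-clients package; the bind DN and password are placeholders and are only needed if the directory requires authentication to search:

$ ldapsearch -x -H ldap://ldap.example.com \
    -D "<bind_dn>" -w "<bind_password>" \
    -b "o=Acme" -s sub "(&(enabled=true)(cn=bob))"

The search should return exactly one entry for the user; if it returns zero or multiple entries, OpenShift Container Platform denies access.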
7.3.3. Creating the LDAP secret
To use the identity provider, you must define an OpenShift Container Platform `Secret` object that contains the `bindPassword` field.
Procedure
Create a `Secret` object that contains the `bindPassword` field:

$ oc create secret generic ldap-secret --from-literal=bindPassword=<secret> -n openshift-config 1

- 1
- The secret key containing the bindPassword for the `--from-literal` argument must be called `bindPassword`.

Tip: You can alternatively apply the following YAML to create the secret:

apiVersion: v1
kind: Secret
metadata:
  name: ldap-secret
  namespace: openshift-config
type: Opaque
data:
  bindPassword: <base64_encoded_bind_password>
7.3.4. Creating a config map
Identity providers use OpenShift Container Platform `ConfigMap` objects in the `openshift-config` namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider.
Procedure
Define an OpenShift Container Platform `ConfigMap` object containing the certificate authority by using the following command. The certificate authority must be stored in the `ca.crt` key of the `ConfigMap` object.

$ oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config

Tip: You can alternatively apply the following YAML to create the config map:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ca-config-map
  namespace: openshift-config
data:
  ca.crt: |
    <CA_certificate_PEM>
7.3.5. Sample LDAP CR
The following custom resource (CR) shows the parameters and acceptable values for an LDAP identity provider.
LDAP CR
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: ldapidp 1
    mappingMethod: claim 2
    type: LDAP
    ldap:
      attributes:
        id: 3
        - dn
        email: 4
        - mail
        name: 5
        - cn
        preferredUsername: 6
        - uid
      bindDN: "" 7
      bindPassword: 8
        name: ldap-secret
      ca: 9
        name: ca-config-map
      insecure: false 10
      url: "ldaps://ldaps.example.com/ou=users,dc=acme,dc=com?uid" 11
- 1
- This provider name is prefixed to the returned user ID to form an identity name.
- 2
- Controls how mappings are established between this provider’s identities and
User
objects. - 3
- List of attributes to use as the identity. First non-empty attribute is used. At least one attribute is required. If none of the listed attributes have a value, authentication fails. Defined attributes are retrieved as raw, allowing for binary values to be used.
- 4
- List of attributes to use as the email address. First non-empty attribute is used.
- 5
- List of attributes to use as the display name. First non-empty attribute is used.
- 6
- List of attributes to use as the preferred user name when provisioning a user for this identity. First non-empty attribute is used.
- 7
- Optional DN to use to bind during the search phase. Must be set if
bindPassword
is defined. - 8
- Optional reference to an OpenShift Container Platform
Secret
object containing the bind password. Must be set ifbindDN
is defined. - 9
- Optional: Reference to an OpenShift Container Platform
ConfigMap
object containing the PEM-encoded certificate authority bundle to use in validating server certificates for the configured URL. Only used wheninsecure
isfalse
. - 10
- When
true
, no TLS connection is made to the server. Whenfalse
,ldaps://
URLs connect using TLS, andldap://
URLs are upgraded to TLS. This must be set tofalse
whenldaps://
URLs are in use, as these URLs always attempt to connect using TLS. - 11
- An RFC 2255 URL which specifies the LDAP host and search parameters to use.
To whitelist users for an LDAP integration, use the lookup
mapping method. Before a login from LDAP would be allowed, a cluster administrator must create an Identity
object and a User
object for each LDAP user.
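For example, a cluster administrator could provision a single LDAP user ahead of time with commands along these lines; the user name bob and the provider user ID are placeholders for your own values:

$ oc create user bob
$ oc create identity ldapidp:<identity_provider_user_id>
$ oc create useridentitymapping ldapidp:<identity_provider_user_id> bob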
Additional resources
-
See Identity provider parameters for information on parameters, such as
mappingMethod
, that are common to all identity providers.
7.3.6. Adding an identity provider to your cluster
After you install your cluster, add an identity provider to it so your users can authenticate.
Prerequisites
- Create an OpenShift Container Platform cluster.
- Create the custom resource (CR) for your identity providers.
- You must be logged in as an administrator.
Procedure
Apply the defined CR:
$ oc apply -f </path/to/CR>
Note: If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply. In this case you can safely ignore this warning.

Log in to the cluster as a user from your identity provider, entering the password when prompted.
$ oc login -u <username>
Confirm that the user logged in successfully, and display the user name.
$ oc whoami
7.4. Configuring a basic authentication identity provider
Configure the basic-authentication
identity provider for users to log in to OpenShift Container Platform with credentials validated against a remote identity provider. Basic authentication is a generic back-end integration mechanism.
7.4.1. About identity providers in OpenShift Container Platform
By default, only a kubeadmin
user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster.
OpenShift Container Platform user names containing /
, :
, and %
are not supported.
7.4.2. About basic authentication
Basic authentication is a generic back-end integration mechanism that allows users to log in to OpenShift Container Platform with credentials validated against a remote identity provider.
Because basic authentication is generic, you can use this identity provider for advanced authentication configurations.
Basic authentication must use an HTTPS connection to the remote server to prevent potential snooping of the user ID and password and man-in-the-middle attacks.
With basic authentication configured, users send their user name and password to OpenShift Container Platform, which then validates those credentials against a remote server by making a server-to-server request, passing the credentials as a basic authentication header. This requires users to send their credentials to OpenShift Container Platform during login.
This only works for user name/password login mechanisms, and OpenShift Container Platform must be able to make network requests to the remote authentication server.
User names and passwords are validated against a remote URL that is protected by basic authentication and returns JSON.
A 401
response indicates failed authentication.
A non-200
status, or the presence of a non-empty "error" key, indicates an error:
{"error":"Error message"}
A 200
status with a sub
(subject) key indicates success:
{"sub":"userid"} 1
- 1
- The subject must be unique to the authenticated user and must not be able to be modified.
A successful response can optionally provide additional data, such as:
A display name using the
name
key. For example:{"sub":"userid", "name": "User Name", ...}
An email address using the
email
key. For example:{"sub":"userid", "email":"user@example.com", ...}
A preferred user name using the
preferred_username
key. This is useful when the unique, unchangeable subject is a database key or UID, and a more human-readable name exists. This is used as a hint when provisioning the OpenShift Container Platform user for the authenticated identity. For example:{"sub":"014fbff9a07c", "preferred_username":"bob", ...}
7.4.3. Creating the secret
Identity providers use OpenShift Container Platform Secret
objects in the openshift-config
namespace to contain the client secret, client certificates, and keys.
Procedure
Create a Secret object that contains the key and certificate by using the following command:

$ oc create secret tls <secret_name> --key=key.pem --cert=cert.pem -n openshift-config

Tip: You can alternatively apply the following YAML to create the secret:

apiVersion: v1
kind: Secret
metadata:
  name: <secret_name>
  namespace: openshift-config
type: kubernetes.io/tls
data:
  tls.crt: <base64_encoded_cert>
  tls.key: <base64_encoded_key>
7.4.4. Creating a config map
Identity providers use OpenShift Container Platform ConfigMap
objects in the openshift-config
namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider.
Procedure
Define an OpenShift Container Platform ConfigMap object containing the certificate authority by using the following command. The certificate authority must be stored in the ca.crt key of the ConfigMap object.

$ oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config

Tip: You can alternatively apply the following YAML to create the config map:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ca-config-map
  namespace: openshift-config
data:
  ca.crt: |
    <CA_certificate_PEM>
7.4.5. Sample basic authentication CR
The following custom resource (CR) shows the parameters and acceptable values for a basic authentication identity provider.
Basic authentication CR
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: basicidp 1
    mappingMethod: claim 2
    type: BasicAuth
    basicAuth:
      url: https://www.example.com/remote-idp 3
      ca: 4
        name: ca-config-map
      tlsClientCert: 5
        name: client-cert-secret
      tlsClientKey: 6
        name: client-key-secret
- 1
- This provider name is prefixed to the returned user ID to form an identity name.
- 2
- Controls how mappings are established between this provider’s identities and
User
objects. - 3
- URL accepting credentials in Basic authentication headers.
- 4
- Optional: Reference to an OpenShift Container Platform
ConfigMap
object containing the PEM-encoded certificate authority bundle to use in validating server certificates for the configured URL. - 5
- Optional: Reference to an OpenShift Container Platform
Secret
object containing the client certificate to present when making requests to the configured URL. - 6
- Reference to an OpenShift Container Platform
Secret
object containing the key for the client certificate. Required iftlsClientCert
is specified.
Additional resources
-
See Identity provider parameters for information on parameters, such as
mappingMethod
, that are common to all identity providers.
7.4.6. Adding an identity provider to your cluster
After you install your cluster, add an identity provider to it so your users can authenticate.
Prerequisites
- Create an OpenShift Container Platform cluster.
- Create the custom resource (CR) for your identity providers.
- You must be logged in as an administrator.
Procedure
Apply the defined CR:
$ oc apply -f </path/to/CR>
Note: If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply. In this case you can safely ignore this warning.

Log in to the cluster as a user from your identity provider, entering the password when prompted.
$ oc login -u <username>
Confirm that the user logged in successfully, and display the user name.
$ oc whoami
7.4.7. Example Apache HTTPD configuration for basic identity providers
The basic identity provider (IDP) configuration in OpenShift Container Platform 4 requires that the IDP server respond with JSON for success and failures. You can use CGI scripting in Apache HTTPD to accomplish this. This section provides examples.
Example /etc/httpd/conf.d/login.conf
<VirtualHost *:443>
  # CGI Scripts in here
  DocumentRoot /var/www/cgi-bin

  # SSL Directives
  SSLEngine on
  SSLCipherSuite PROFILE=SYSTEM
  SSLProxyCipherSuite PROFILE=SYSTEM
  SSLCertificateFile /etc/pki/tls/certs/localhost.crt
  SSLCertificateKeyFile /etc/pki/tls/private/localhost.key

  # Configure HTTPD to execute scripts
  ScriptAlias /basic /var/www/cgi-bin

  # Handles a failed login attempt
  ErrorDocument 401 /basic/fail.cgi

  # Handles authentication
  <Location /basic/login.cgi>
    AuthType Basic
    AuthName "Please Log In"
    AuthBasicProvider file
    AuthUserFile /etc/httpd/conf/passwords
    Require valid-user
  </Location>
</VirtualHost>
Example /var/www/cgi-bin/login.cgi
#!/bin/bash
echo "Content-Type: application/json"
echo ""
echo '{"sub":"userid", "name":"'$REMOTE_USER'"}'
exit 0
Example /var/www/cgi-bin/fail.cgi
#!/bin/bash
echo "Content-Type: application/json"
echo ""
echo '{"error": "Login failure"}'
exit 0
7.4.7.1. File requirements
These are the requirements for the files you create on an Apache HTTPD web server:
- login.cgi and fail.cgi must be executable (chmod +x).
- login.cgi and fail.cgi must have proper SELinux contexts if SELinux is enabled: run restorecon -RFv /var/www/cgi-bin, or ensure that the context is httpd_sys_script_exec_t using ls -laZ. See the example commands after this list.
- login.cgi is only executed if your user successfully logs in per Require and Auth directives.
- fail.cgi is executed if the user fails to log in, resulting in an HTTP 401 response.
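For example, assuming the scripts are stored in /var/www/cgi-bin as in the configuration above, the following commands make them executable and restore the SELinux context:

# chmod +x /var/www/cgi-bin/login.cgi /var/www/cgi-bin/fail.cgi
# restorecon -RFv /var/www/cgi-bin
# ls -laZ /var/www/cgi-bin

The ls -laZ output should show the httpd_sys_script_exec_t context on both scripts.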
7.4.8. Basic authentication troubleshooting
The most common issue relates to network connectivity to the backend server. For simple debugging, run curl
commands on the master. To test for a successful login, replace the <user>
and <password>
in the following example command with valid credentials. To test an invalid login, replace them with false credentials.
$ curl --cacert /path/to/ca.crt --cert /path/to/client.crt --key /path/to/client.key -u <user>:<password> -v https://www.example.com/remote-idp
Successful responses
A 200
status with a sub
(subject) key indicates success:
{"sub":"userid"}
The subject must be unique to the authenticated user, and must not be able to be modified.
A successful response can optionally provide additional data, such as:
A display name using the
name
key:{"sub":"userid", "name": "User Name", ...}
An email address using the
email
key:{"sub":"userid", "email":"user@example.com", ...}
A preferred user name using the
preferred_username
key:{"sub":"014fbff9a07c", "preferred_username":"bob", ...}
The
preferred_username
key is useful when the unique, unchangeable subject is a database key or UID, and a more human-readable name exists. This is used as a hint when provisioning the OpenShift Container Platform user for the authenticated identity.
Failed responses
-
A
401
response indicates failed authentication. -
A non-
200
status or the presence of a non-empty "error" key indicates an error:

{"error":"Error message"}
7.5. Configuring a request header identity provider
Configure the request-header
identity provider to identify users from request header values, such as X-Remote-User
. It is typically used in combination with an authenticating proxy, which sets the request header value.
7.5.1. About identity providers in OpenShift Container Platform
By default, only a kubeadmin
user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster.
OpenShift Container Platform user names containing /
, :
, and %
are not supported.
7.5.2. About request header authentication
A request header identity provider identifies users from request header values, such as X-Remote-User
. It is typically used in combination with an authenticating proxy, which sets the request header value. The request header identity provider cannot be combined with other identity providers that use direct password logins, such as htpasswd, Keystone, LDAP or basic authentication.
You can also use the request header identity provider for advanced configurations such as the community-supported SAML authentication. Note that this solution is not supported by Red Hat.
For users to authenticate using this identity provider, they must access https://<namespace_route>/oauth/authorize
(and subpaths) via an authenticating proxy. To accomplish this, configure the OAuth server to redirect unauthenticated requests for OAuth tokens to the proxy endpoint that proxies to https://<namespace_route>/oauth/authorize
.
To redirect unauthenticated requests from clients expecting browser-based login flows:
-
Set the
provider.loginURL
parameter to the authenticating proxy URL that will authenticate interactive clients and then proxy the request tohttps://<namespace_route>/oauth/authorize
.
To redirect unauthenticated requests from clients expecting WWW-Authenticate
challenges:
-
Set the
provider.challengeURL
parameter to the authenticating proxy URL that will authenticate clients expectingWWW-Authenticate
challenges and then proxy the request tohttps://<namespace_route>/oauth/authorize
.
The provider.challengeURL
and provider.loginURL
parameters can include the following tokens in the query portion of the URL:
- ${url} is replaced with the current URL, escaped to be safe in a query parameter. For example: https://www.example.com/sso-login?then=${url}
- ${query} is replaced with the current query string, unescaped. For example: https://www.example.com/auth-proxy/oauth/authorize?${query}
As of OpenShift Container Platform 4.1, your proxy must support mutual TLS.
7.5.2.1. SSPI connection support on Microsoft Windows
Using SSPI connection support on Microsoft Windows is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The OpenShift CLI (oc
) supports the Security Support Provider Interface (SSPI) to allow for SSO flows on Microsoft Windows. If you use the request header identity provider with a GSSAPI-enabled proxy to connect an Active Directory server to OpenShift Container Platform, users can automatically authenticate to OpenShift Container Platform by using the oc
command line interface from a domain-joined Microsoft Windows computer.
7.5.3. Creating a config map
Identity providers use OpenShift Container Platform ConfigMap
objects in the openshift-config
namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider.
Procedure
Define an OpenShift Container Platform ConfigMap object containing the certificate authority by using the following command. The certificate authority must be stored in the ca.crt key of the ConfigMap object.

$ oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config

Tip: You can alternatively apply the following YAML to create the config map:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ca-config-map
  namespace: openshift-config
data:
  ca.crt: |
    <CA_certificate_PEM>
7.5.4. Sample request header CR
The following custom resource (CR) shows the parameters and acceptable values for a request header identity provider.
Request header CR
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: requestheaderidp 1
    mappingMethod: claim 2
    type: RequestHeader
    requestHeader:
      challengeURL: "https://www.example.com/challenging-proxy/oauth/authorize?${query}" 3
      loginURL: "https://www.example.com/login-proxy/oauth/authorize?${query}" 4
      ca: 5
        name: ca-config-map
      clientCommonNames: 6
      - my-auth-proxy
      headers: 7
      - X-Remote-User
      - SSO-User
      emailHeaders: 8
      - X-Remote-User-Email
      nameHeaders: 9
      - X-Remote-User-Display-Name
      preferredUsernameHeaders: 10
      - X-Remote-User-Login
- 1
- This provider name is prefixed to the user name in the request header to form an identity name.
- 2
- Controls how mappings are established between this provider’s identities and
User
objects. - 3
- Optional: URL to redirect unauthenticated
/oauth/authorize
requests to, that will authenticate browser-based clients and then proxy their request tohttps://<namespace_route>/oauth/authorize
. The URL that proxies tohttps://<namespace_route>/oauth/authorize
must end with/authorize
(with no trailing slash), and also proxy subpaths, in order for OAuth approval flows to work properly.${url}
is replaced with the current URL, escaped to be safe in a query parameter.${query}
is replaced with the current query string. If this attribute is not defined, thenloginURL
must be used. - 4
- Optional: URL to redirect unauthenticated
/oauth/authorize
requests to, that will authenticate clients which expectWWW-Authenticate
challenges, and then proxy them tohttps://<namespace_route>/oauth/authorize
.${url}
is replaced with the current URL, escaped to be safe in a query parameter.${query}
is replaced with the current query string. If this attribute is not defined, thenchallengeURL
must be used. - 5
- Reference to an OpenShift Container Platform
ConfigMap
object containing a PEM-encoded certificate bundle. Used as a trust anchor to validate the TLS certificates presented by the remote server.ImportantAs of OpenShift Container Platform 4.1, the
ca
field is required for this identity provider. This means that your proxy must support mutual TLS. - 6
- Optional: list of common names (
cn
). If set, a valid client certificate with a Common Name (cn
) in the specified list must be presented before the request headers are checked for user names. If empty, any Common Name is allowed. Can only be used in combination withca
. - 7
- Header names to check, in order, for the user identity. The first header containing a value is used as the identity. Required, case-insensitive.
- 8
- Header names to check, in order, for an email address. The first header containing a value is used as the email address. Optional, case-insensitive.
- 9
- Header names to check, in order, for a display name. The first header containing a value is used as the display name. Optional, case-insensitive.
- 10
- Header names to check, in order, for a preferred user name, if different than the immutable identity determined from the headers specified in
headers
. The first header containing a value is used as the preferred user name when provisioning. Optional, case-insensitive.
Additional resources
-
See Identity provider parameters for information on parameters, such as
mappingMethod
, that are common to all identity providers.
7.5.5. Adding an identity provider to your cluster
After you install your cluster, add an identity provider to it so your users can authenticate.
Prerequisites
- Create an OpenShift Container Platform cluster.
- Create the custom resource (CR) for your identity providers.
- You must be logged in as an administrator.
Procedure
Apply the defined CR:
$ oc apply -f </path/to/CR>
Note: If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply. In this case you can safely ignore this warning.

Log in to the cluster as a user from your identity provider, entering the password when prompted.
$ oc login -u <username>
Confirm that the user logged in successfully, and display the user name.
$ oc whoami
7.5.6. Example Apache authentication configuration using request header
This example configures an Apache authentication proxy for OpenShift Container Platform using the request header identity provider.
Custom proxy configuration
Using the mod_auth_gssapi
module is a popular way to configure the Apache authentication proxy using the request header identity provider; however, it is not required. Other proxies can easily be used if the following requirements are met:
-
Block the
X-Remote-User
header from client requests to prevent spoofing. -
Enforce client certificate authentication in the
RequestHeaderIdentityProvider
configuration. -
Require the
X-Csrf-Token
header be set for all authentication requests using the challenge flow. -
Make sure only the
/oauth/authorize
endpoint and its subpaths are proxied; redirects must be rewritten to allow the backend server to send the client to the correct location. -
The URL that proxies to
https://<namespace_route>/oauth/authorize
must end with/authorize
with no trailing slash. For example,https://proxy.example.com/login-proxy/authorize?…
must proxy tohttps://<namespace_route>/oauth/authorize?…
. -
Subpaths of the URL that proxies to
https://<namespace_route>/oauth/authorize
must proxy to subpaths ofhttps://<namespace_route>/oauth/authorize
. For example,https://proxy.example.com/login-proxy/authorize/approve?…
must proxy tohttps://<namespace_route>/oauth/authorize/approve?…
.
The https://<namespace_route>
address is the route to the OAuth server and can be obtained by running oc get route -n openshift-authentication
.
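For example, assuming the default route name oauth-openshift, the following command prints only the host name of the OAuth route:

$ oc get route oauth-openshift -n openshift-authentication -o jsonpath='{.spec.host}'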
Configuring Apache authentication using request header
This example uses the mod_auth_gssapi
module to configure an Apache authentication proxy using the request header identity provider.
Prerequisites
Obtain the
mod_auth_gssapi
module from the Optional channel. You must have the following packages installed on your local machine:-
httpd
-
mod_ssl
-
mod_session
-
apr-util-openssl
-
mod_auth_gssapi
-
Generate a CA for validating requests that submit the trusted header. Define an OpenShift Container Platform ConfigMap object containing the CA. This is done by running:

$ oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config 1

- 1
- The CA must be stored in the ca.crt key of the ConfigMap object.

Tip: You can alternatively apply the following YAML to create the config map:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ca-config-map
  namespace: openshift-config
data:
  ca.crt: |
    <CA_certificate_PEM>
- Generate a client certificate for the proxy. You can generate this certificate by using any x509 certificate tooling. The client certificate must be signed by the CA you generated for validating requests that submit the trusted header.
- Create the custom resource (CR) for your identity providers.
Procedure
This proxy uses a client certificate to connect to the OAuth server, which is configured to trust the X-Remote-User
header.
- Create the certificate for the Apache configuration. The certificate that you specify as the SSLProxyMachineCertificateFile parameter value is the proxy's client certificate that is used to authenticate the proxy to the server. It must use TLS Web Client Authentication as the extended key type.
- Create the Apache configuration. Use the following template to provide your required settings and values:
Important: Carefully review the template and customize its contents to fit your environment.

LoadModule request_module modules/mod_request.so
LoadModule auth_gssapi_module modules/mod_auth_gssapi.so
# Some Apache configurations might require these modules.
# LoadModule auth_form_module modules/mod_auth_form.so
# LoadModule session_module modules/mod_session.so

# Nothing needs to be served over HTTP. This virtual host simply redirects to
# HTTPS.
<VirtualHost *:80>
  DocumentRoot /var/www/html
  RewriteEngine On
  RewriteRule ^(.*)$ https://%{HTTP_HOST}$1 [R,L]
</VirtualHost>

<VirtualHost *:443>
  # This needs to match the certificates you generated. See the CN and X509v3
  # Subject Alternative Name in the output of:
  # openssl x509 -text -in /etc/pki/tls/certs/localhost.crt
  ServerName www.example.com

  DocumentRoot /var/www/html
  SSLEngine on
  SSLCertificateFile /etc/pki/tls/certs/localhost.crt
  SSLCertificateKeyFile /etc/pki/tls/private/localhost.key
  SSLCACertificateFile /etc/pki/CA/certs/ca.crt

  SSLProxyEngine on
  SSLProxyCACertificateFile /etc/pki/CA/certs/ca.crt
  # It is critical to enforce client certificates. Otherwise, requests can
  # spoof the X-Remote-User header by accessing the /oauth/authorize endpoint
  # directly.
  SSLProxyMachineCertificateFile /etc/pki/tls/certs/authproxy.pem

  # To use the challenging-proxy, an X-Csrf-Token must be present.
  RewriteCond %{REQUEST_URI} ^/challenging-proxy
  RewriteCond %{HTTP:X-Csrf-Token} ^$ [NC]
  RewriteRule ^.* - [F,L]

  <Location /challenging-proxy/oauth/authorize>
    # Insert your backend server name/ip here.
    ProxyPass https://<namespace_route>/oauth/authorize
    AuthName "SSO Login"
    # For Kerberos
    AuthType GSSAPI
    Require valid-user
    RequestHeader set X-Remote-User %{REMOTE_USER}s

    GssapiCredStore keytab:/etc/httpd/protected/auth-proxy.keytab
    # Enable the following if you want to allow users to fallback
    # to password based authentication when they do not have a client
    # configured to perform kerberos authentication.
    GssapiBasicAuth On

    # For ldap:
    # AuthBasicProvider ldap
    # AuthLDAPURL "ldap://ldap.example.com:389/ou=People,dc=my-domain,dc=com?uid?sub?(objectClass=*)"
  </Location>

  <Location /login-proxy/oauth/authorize>
    # Insert your backend server name/ip here.
    ProxyPass https://<namespace_route>/oauth/authorize
    AuthName "SSO Login"
    AuthType GSSAPI
    Require valid-user
    RequestHeader set X-Remote-User %{REMOTE_USER}s env=REMOTE_USER

    GssapiCredStore keytab:/etc/httpd/protected/auth-proxy.keytab
    # Enable the following if you want to allow users to fallback
    # to password based authentication when they do not have a client
    # configured to perform kerberos authentication.
    GssapiBasicAuth On

    ErrorDocument 401 /login.html
  </Location>
</VirtualHost>

RequestHeader unset X-Remote-User
Note: The https://<namespace_route> address is the route to the OAuth server and can be obtained by running oc get route -n openshift-authentication.

Update the identityProviders stanza in the custom resource (CR):

identityProviders:
  - name: requestheaderidp
    type: RequestHeader
    requestHeader:
      challengeURL: "https://<namespace_route>/challenging-proxy/oauth/authorize?${query}"
      loginURL: "https://<namespace_route>/login-proxy/oauth/authorize?${query}"
      ca:
        name: ca-config-map
      clientCommonNames:
      - my-auth-proxy
      headers:
      - X-Remote-User
Verify the configuration.
Confirm that you can bypass the proxy by requesting a token by supplying the correct client certificate and header:
# curl -L -k -H "X-Remote-User: joe" \
   --cert /etc/pki/tls/certs/authproxy.pem \
   https://<namespace_route>/oauth/token/request
Confirm that requests that do not supply the client certificate fail by requesting a token without the certificate:
# curl -L -k -H "X-Remote-User: joe" \
   https://<namespace_route>/oauth/token/request
Confirm that the
challengeURL
redirect is active:

# curl -k -v -H 'X-Csrf-Token: 1' \
   'https://<namespace_route>/oauth/authorize?client_id=openshift-challenging-client&response_type=token'
Copy the
challengeURL
redirect to use in the next step.Run this command to show a
401
response with a WWW-Authenticate basic challenge, a negotiate challenge, or both challenges:

# curl -k -v -H 'X-Csrf-Token: 1' \
   <challengeURL_redirect + query>
Test logging in to the OpenShift CLI (
oc
) with and without using a Kerberos ticket:If you generated a Kerberos ticket by using
kinit
, destroy it:

# kdestroy -c cache_name 1
- 1
- Make sure to provide the name of your Kerberos cache.
Log in to the
oc
tool by using your Kerberos credentials:

# oc login -u <username>
Enter your Kerberos password at the prompt.
Log out of the
oc
tool:

# oc logout
Use your Kerberos credentials to get a ticket:
# kinit
Enter your Kerberos user name and password at the prompt.
Confirm that you can log in to the
oc
tool:

# oc login
If your configuration is correct, you are logged in without entering separate credentials.
7.6. Configuring a GitHub or GitHub Enterprise identity provider
Configure the github
identity provider to validate user names and passwords against GitHub or GitHub Enterprise’s OAuth authentication server. OAuth facilitates a token exchange flow between OpenShift Container Platform and GitHub or GitHub Enterprise.
You can use the GitHub integration to connect to either GitHub or GitHub Enterprise. For GitHub Enterprise integrations, you must provide the hostname
of your instance and can optionally provide a ca
certificate bundle to use in requests to the server.
The following steps apply to both GitHub and GitHub Enterprise unless noted.
7.6.1. About identity providers in OpenShift Container Platform
By default, only a kubeadmin
user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster.
OpenShift Container Platform user names containing /
, :
, and %
are not supported.
7.6.2. About GitHub authentication
Configuring GitHub authentication allows users to log in to OpenShift Container Platform with their GitHub credentials. To prevent anyone with any GitHub user ID from logging in to your OpenShift Container Platform cluster, you can restrict access to only those in specific GitHub organizations.
7.6.3. Registering a GitHub application
To use GitHub or GitHub Enterprise as an identity provider, you must register an application to use.
Procedure
Register an application on GitHub:
- For GitHub, click Settings → Developer settings → OAuth Apps → Register a new OAuth application.
- For GitHub Enterprise, go to your GitHub Enterprise home page and then click Settings → Developer settings → Register a new application.
-
Enter an application name, for example
My OpenShift Install
. -
Enter a homepage URL, such as
https://oauth-openshift.apps.<cluster-name>.<cluster-domain>
. - Optional: Enter an application description.
Enter the authorization callback URL, where the end of the URL contains the identity provider
name
:https://oauth-openshift.apps.<cluster-name>.<cluster-domain>/oauth2callback/<idp-provider-name>
For example:
https://oauth-openshift.apps.openshift-cluster.example.com/oauth2callback/github
- Click Register application. GitHub provides a client ID and a client secret. You need these values to complete the identity provider configuration.
7.6.4. Creating the secret
Identity providers use OpenShift Container Platform Secret
objects in the openshift-config
namespace to contain the client secret, client certificates, and keys.
Procedure
Create a Secret object containing a string by using the following command:

$ oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config

Tip: You can alternatively apply the following YAML to create the secret:

apiVersion: v1
kind: Secret
metadata:
  name: <secret_name>
  namespace: openshift-config
type: Opaque
data:
  clientSecret: <base64_encoded_client_secret>

You can define a Secret object containing the contents of a file by using the following command:

$ oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config
7.6.5. Creating a config map
Identity providers use OpenShift Container Platform ConfigMap
objects in the openshift-config
namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider.
This procedure is only required for GitHub Enterprise.
Procedure
Define an OpenShift Container Platform ConfigMap object containing the certificate authority by using the following command. The certificate authority must be stored in the ca.crt key of the ConfigMap object.

$ oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config

Tip: You can alternatively apply the following YAML to create the config map:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ca-config-map
  namespace: openshift-config
data:
  ca.crt: |
    <CA_certificate_PEM>
7.6.6. Sample GitHub CR
The following custom resource (CR) shows the parameters and acceptable values for a GitHub identity provider.
GitHub CR
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: githubidp 1
    mappingMethod: claim 2
    type: GitHub
    github:
      ca: 3
        name: ca-config-map
      clientID: {...} 4
      clientSecret: 5
        name: github-secret
      hostname: ... 6
      organizations: 7
      - myorganization1
      - myorganization2
      teams: 8
      - myorganization1/team-a
      - myorganization2/team-b
- 1
- This provider name is prefixed to the GitHub numeric user ID to form an identity name. It is also used to build the callback URL.
- 2
- Controls how mappings are established between this provider’s identities and
User
objects. - 3
- Optional: Reference to an OpenShift Container Platform
ConfigMap
object containing the PEM-encoded certificate authority bundle to use in validating server certificates for the configured URL. Only for use in GitHub Enterprise with a non-publicly trusted root certificate. - 4
- The client ID of a registered GitHub OAuth application. The application must be configured with a callback URL of
https://oauth-openshift.apps.<cluster-name>.<cluster-domain>/oauth2callback/<idp-provider-name>
. - 5
- Reference to an OpenShift Container Platform
Secret
object containing the client secret issued by GitHub. - 6
- For GitHub Enterprise, you must provide the hostname of your instance, such as
example.com
. This value must match the GitHub Enterprisehostname
value in the /setup/settings
file and cannot include a port number. If this value is not set, then eitherteams
ororganizations
must be defined. For GitHub, omit this parameter. - 7
- The list of organizations. Either the
organizations
orteams
field must be set unless thehostname
field is set, or ifmappingMethod
is set tolookup
. Cannot be used in combination with theteams
field. - 8
- The list of teams. Either the
teams
ororganizations
field must be set unless thehostname
field is set, or ifmappingMethod
is set tolookup
. Cannot be used in combination with theorganizations
field.
If organizations
or teams
is specified, only GitHub users that are members of at least one of the listed organizations will be allowed to log in. If the GitHub OAuth application configured in clientID
is not owned by the organization, an organization owner must grant third-party access to use this option. This can be done during the first GitHub login by the organization’s administrator, or from the GitHub organization settings.
Additional resources
-
See Identity provider parameters for information on parameters, such as
mappingMethod
, that are common to all identity providers.
7.6.7. Adding an identity provider to your cluster
After you install your cluster, add an identity provider to it so your users can authenticate.
Prerequisites
- Create an OpenShift Container Platform cluster.
- Create the custom resource (CR) for your identity providers.
- You must be logged in as an administrator.
Procedure
Apply the defined CR:
$ oc apply -f </path/to/CR>
Note: If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply. In this case you can safely ignore this warning.

Obtain a token from the OAuth server.

As long as the kubeadmin user has been removed, the oc login command provides instructions on how to access a web page where you can retrieve the token.

You can also access this page from the web console by navigating to (?) Help → Command Line Tools → Copy Login Command.
Log in to the cluster, passing in the token to authenticate.
$ oc login --token=<token>
Note: This identity provider does not support logging in with a user name and password.
Confirm that the user logged in successfully, and display the user name.
$ oc whoami
7.7. Configuring a GitLab identity provider
Configure the gitlab
identity provider using GitLab.com or any other GitLab instance as an identity provider.
7.7.1. About identity providers in OpenShift Container Platform
By default, only a kubeadmin
user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster.
OpenShift Container Platform user names containing /
, :
, and %
are not supported.
7.7.2. About GitLab authentication
Configuring GitLab authentication allows users to log in to OpenShift Container Platform with their GitLab credentials.
If you use GitLab version 7.7.0 to 11.0, you connect using the OAuth integration. If you use GitLab version 11.1 or later, you can use OpenID Connect (OIDC) to connect instead of OAuth.
7.7.3. Creating the secret
Identity providers use OpenShift Container Platform Secret
objects in the openshift-config
namespace to contain the client secret, client certificates, and keys.
Procedure
Create a Secret object containing a string by using the following command:

$ oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config

Tip: You can alternatively apply the following YAML to create the secret:

apiVersion: v1
kind: Secret
metadata:
  name: <secret_name>
  namespace: openshift-config
type: Opaque
data:
  clientSecret: <base64_encoded_client_secret>

You can define a Secret object containing the contents of a file by using the following command:

$ oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config
7.7.4. Creating a config map
Identity providers use OpenShift Container Platform ConfigMap
objects in the openshift-config
namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider.
This procedure is only required if your GitLab instance uses a certificate that is not signed by a publicly trusted certificate authority.
Procedure
Define an OpenShift Container Platform ConfigMap object containing the certificate authority by using the following command. The certificate authority must be stored in the ca.crt key of the ConfigMap object.

$ oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config

Tip: You can alternatively apply the following YAML to create the config map:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ca-config-map
  namespace: openshift-config
data:
  ca.crt: |
    <CA_certificate_PEM>
7.7.5. Sample GitLab CR
The following custom resource (CR) shows the parameters and acceptable values for a GitLab identity provider.
GitLab CR
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: gitlabidp 1
    mappingMethod: claim 2
    type: GitLab
    gitlab:
      clientID: {...} 3
      clientSecret: 4
        name: gitlab-secret
      url: https://gitlab.com 5
      ca: 6
        name: ca-config-map
- 1
- This provider name is prefixed to the GitLab numeric user ID to form an identity name. It is also used to build the callback URL.
- 2
- Controls how mappings are established between this provider’s identities and
User
objects. - 3
- The client ID of a registered GitLab OAuth application. The application must be configured with a callback URL of
https://oauth-openshift.apps.<cluster-name>.<cluster-domain>/oauth2callback/<idp-provider-name>
. - 4
- Reference to an OpenShift Container Platform
Secret
object containing the client secret issued by GitLab. - 5
- The host URL of a GitLab provider. This could either be
https://gitlab.com/
or any other self-hosted instance of GitLab.
- Optional: Reference to an OpenShift Container Platform
ConfigMap
object containing the PEM-encoded certificate authority bundle to use in validating server certificates for the configured URL.
Additional resources
-
See Identity provider parameters for information on parameters, such as
mappingMethod
, that are common to all identity providers.
7.7.6. Adding an identity provider to your cluster
After you install your cluster, add an identity provider to it so your users can authenticate.
Prerequisites
- Create an OpenShift Container Platform cluster.
- Create the custom resource (CR) for your identity providers.
- You must be logged in as an administrator.
Procedure
Apply the defined CR:
$ oc apply -f </path/to/CR>
Note: If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply. In this case you can safely ignore this warning.

Log in to the cluster as a user from your identity provider, entering the password when prompted.
$ oc login -u <username>
Confirm that the user logged in successfully, and display the user name.
$ oc whoami
7.8. Configuring a Google identity provider
Configure the google
identity provider using the Google OpenID Connect integration.
7.8.1. About identity providers in OpenShift Container Platform
By default, only a kubeadmin
user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster.
OpenShift Container Platform user names containing /
, :
, and %
are not supported.
7.8.2. About Google authentication
Using Google as an identity provider allows any Google user to authenticate to your server. You can limit authentication to members of a specific hosted domain with the hostedDomain
configuration attribute.
Using Google as an identity provider requires users to get a token using <namespace_route>/oauth/token/request
to use with command-line tools.
7.8.3. Creating the secret
Identity providers use OpenShift Container Platform Secret
objects in the openshift-config
namespace to contain the client secret, client certificates, and keys.
Procedure
Create a Secret object containing a string by using the following command:

$ oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config

Tip: You can alternatively apply the following YAML to create the secret:

apiVersion: v1
kind: Secret
metadata:
  name: <secret_name>
  namespace: openshift-config
type: Opaque
data:
  clientSecret: <base64_encoded_client_secret>

You can define a Secret object containing the contents of a file by using the following command:

$ oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config
7.8.4. Sample Google CR
The following custom resource (CR) shows the parameters and acceptable values for a Google identity provider.
Google CR
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: googleidp 1
    mappingMethod: claim 2
    type: Google
    google:
      clientID: {...} 3
      clientSecret: 4
        name: google-secret
      hostedDomain: "example.com" 5
- 1
- This provider name is prefixed to the Google numeric user ID to form an identity name. It is also used to build the redirect URL.
- 2
- Controls how mappings are established between this provider’s identities and
User
objects. - 3
- The client ID of a registered Google project. The project must be configured with a redirect URI of
https://oauth-openshift.apps.<cluster-name>.<cluster-domain>/oauth2callback/<idp-provider-name>
. - 4
- Reference to an OpenShift Container Platform
Secret
object containing the client secret issued by Google. - 5
- A hosted domain used to restrict sign-in accounts. Optional if the
lookup
mappingMethod
is used. If empty, any Google account is allowed to authenticate.
Additional resources
-
See Identity provider parameters for information on parameters, such as
mappingMethod
, that are common to all identity providers.
7.8.5. Adding an identity provider to your cluster
After you install your cluster, add an identity provider to it so your users can authenticate.
Prerequisites
- Create an OpenShift Container Platform cluster.
- Create the custom resource (CR) for your identity providers.
- You must be logged in as an administrator.
Procedure
Apply the defined CR:
$ oc apply -f </path/to/CR>
Note: If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply. In this case you can safely ignore this warning.

Obtain a token from the OAuth server.

As long as the kubeadmin user has been removed, the oc login command provides instructions on how to access a web page where you can retrieve the token.

You can also access this page from the web console by navigating to (?) Help → Command Line Tools → Copy Login Command.
Log in to the cluster, passing in the token to authenticate.
$ oc login --token=<token>
Note: This identity provider does not support logging in with a user name and password.
Confirm that the user logged in successfully, and display the user name.
$ oc whoami
7.9. Configuring an OpenID Connect identity provider
Configure the oidc
identity provider to integrate with an OpenID Connect identity provider using an Authorization Code Flow.
7.9.1. About identity providers in OpenShift Container Platform
By default, only a kubeadmin
user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster.
OpenShift Container Platform user names containing /
, :
, and %
are not supported.
7.9.2. About OpenID Connect authentication
The Authentication Operator in OpenShift Container Platform requires that the configured OpenID Connect identity provider implements the OpenID Connect Discovery specification.
ID Token
and UserInfo
decryptions are not supported.
By default, the openid
scope is requested. If required, extra scopes can be specified in the extraScopes
field.
Claims are read from the JWT id_token
returned from the OpenID identity provider and, if specified, from the JSON returned by the UserInfo
URL.
At least one claim must be configured to use as the user’s identity. The standard identity claim is sub
.
You can also indicate which claims to use as the user’s preferred user name, display name, and email address. If multiple claims are specified, the first one with a non-empty value is used. The following table lists the standard claims:
Claim | Description |
---|---|
sub | Short for "subject identifier." The remote identity for the user at the issuer. |
preferred_username | The preferred user name when provisioning a user. A shorthand name that the user wants to be referred to as, such as janedoe. |
email | Email address. |
name | Display name. |
See the OpenID claims documentation for more information.
Unless your OpenID Connect identity provider supports the resource owner password credentials (ROPC) grant flow, users must get a token from <namespace_route>/oauth/token/request
to use with command-line tools.
7.9.3. Supported OIDC providers
Red Hat tests and supports the following OpenID Connect (OIDC) providers with OpenShift Container Platform. Using an OIDC provider that is not on this list might work with OpenShift Container Platform, but the provider was not tested by Red Hat and therefore is not supported by Red Hat.
- Active Directory Federation Services for Windows Server

  Note: Currently, it is not supported to use Active Directory Federation Services for Windows Server with OpenShift Container Platform when custom claims are used.

- GitLab
- Keycloak
- Microsoft identity platform (Azure Active Directory v2.0)

  Note: Currently, it is not supported to use Microsoft identity platform when group names are required to be synced.

- Okta
- Ping Identity
- Red Hat Single Sign-On
7.9.4. Creating the secret
Identity providers use OpenShift Container Platform Secret
objects in the openshift-config
namespace to contain the client secret, client certificates, and keys.
Procedure
Create a Secret object containing a string by using the following command:

$ oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config

Tip: You can alternatively apply the following YAML to create the secret:

apiVersion: v1
kind: Secret
metadata:
  name: <secret_name>
  namespace: openshift-config
type: Opaque
data:
  clientSecret: <base64_encoded_client_secret>

You can define a Secret object containing the contents of a file by using the following command:

$ oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config
7.9.5. Creating a config map
Identity providers use OpenShift Container Platform ConfigMap
objects in the openshift-config
namespace to contain the certificate authority bundle. These are primarily used to contain certificate bundles needed by the identity provider.
This procedure is only required if your OpenID Connect provider uses a certificate that is not signed by a publicly trusted certificate authority.
Procedure
Define an OpenShift Container Platform ConfigMap object containing the certificate authority by using the following command. The certificate authority must be stored in the ca.crt key of the ConfigMap object.

$ oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config

Tip: You can alternatively apply the following YAML to create the config map:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ca-config-map
  namespace: openshift-config
data:
  ca.crt: |
    <CA_certificate_PEM>
7.9.6. Sample OpenID Connect CRs
The following custom resources (CRs) show the parameters and acceptable values for an OpenID Connect identity provider.
If you must specify a custom certificate bundle, extra scopes, extra authorization request parameters, or a userInfo
URL, use the full OpenID Connect CR.
Standard OpenID Connect CR
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: oidcidp 1
    mappingMethod: claim 2
    type: OpenID
    openID:
      clientID: ... 3
      clientSecret: 4
        name: idp-secret
      claims: 5
        preferredUsername:
        - preferred_username
        name:
        - name
        email:
        - email
        groups:
        - groups
      issuer: https://www.idp-issuer.com 6
- 1
- This provider name is prefixed to the value of the identity claim to form an identity name. It is also used to build the redirect URL.
- 2
- Controls how mappings are established between this provider’s identities and
User
objects. - 3
- The client ID of a client registered with the OpenID provider. The client must be allowed to redirect to
https://oauth-openshift.apps.<cluster_name>.<cluster_domain>/oauth2callback/<idp_provider_name>
. - 4
- A reference to an OpenShift Container Platform
Secret
object containing the client secret. - 5
- The list of claims to use as the identity. The first non-empty claim is used.
- 6
- The Issuer Identifier described in the OpenID spec. Must use
https
without query or fragment component.
Full OpenID Connect CR
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: oidcidp
    mappingMethod: claim
    type: OpenID
    openID:
      clientID: ...
      clientSecret:
        name: idp-secret
      ca: 1
        name: ca-config-map
      extraScopes: 2
      - email
      - profile
      extraAuthorizeParameters: 3
        include_granted_scopes: "true"
      claims:
        preferredUsername: 4
        - preferred_username
        - email
        name: 5
        - nickname
        - given_name
        - name
        email: 6
        - custom_email_claim
        - email
        groups: 7
        - groups
      issuer: https://www.idp-issuer.com
- 1
- Optional: Reference to an OpenShift Container Platform config map containing the PEM-encoded certificate authority bundle to use in validating server certificates for the configured URL.
- 2
- Optional: The list of scopes to request, in addition to the
openid
scope, during the authorization token request. - 3
- Optional: A map of extra parameters to add to the authorization token request.
- 4
- The list of claims to use as the preferred user name when provisioning a user for this identity. The first non-empty claim is used.
- 5
- The list of claims to use as the display name. The first non-empty claim is used.
- 6
- The list of claims to use as the email address. The first non-empty claim is used.
- 7
- The list of claims to use to synchronize groups from the OpenID Connect provider to OpenShift Container Platform upon user login. The first non-empty claim is used.
Additional resources
-
See Identity provider parameters for information on parameters, such as
mappingMethod
, that are common to all identity providers.
7.9.7. Adding an identity provider to your cluster
After you install your cluster, add an identity provider to it so your users can authenticate.
Prerequisites
- Create an OpenShift Container Platform cluster.
- Create the custom resource (CR) for your identity providers.
- You must be logged in as an administrator.
Procedure
Apply the defined CR:
$ oc apply -f </path/to/CR>
Note: If a CR does not exist, oc apply creates a new CR and might trigger the following warning: Warning: oc apply should be used on resources created by either oc create --save-config or oc apply. In this case you can safely ignore this warning.

Obtain a token from the OAuth server.

As long as the kubeadmin user has been removed, the oc login command provides instructions on how to access a web page where you can retrieve the token.

You can also access this page from the web console by navigating to (?) Help → Command Line Tools → Copy Login Command.
Log in to the cluster, passing in the token to authenticate.
$ oc login --token=<token>
Note: If your OpenID Connect identity provider supports the resource owner password credentials (ROPC) grant flow, you can log in with a user name and password. You might need to take steps to enable the ROPC grant flow for your identity provider.
After the OIDC identity provider is configured in OpenShift Container Platform, you can log in by using the following command, which prompts for your user name and password:
$ oc login -u <identity_provider_username> --server=<api_server_url_and_port>
Confirm that the user logged in successfully, and display the user name.
$ oc whoami
7.9.8. Configuring identity providers using the web console
Configure your identity provider (IDP) through the web console instead of the CLI.
Prerequisites
- You must be logged in to the web console as a cluster administrator.
Procedure
- Navigate to Administration → Cluster Settings.
- Under the Configuration tab, click OAuth.
- Under the Identity Providers section, select your identity provider from the Add drop-down menu.
You can specify multiple IDPs through the web console without overwriting existing IDPs.
Chapter 8. Using RBAC to define and apply permissions
8.1. RBAC overview
Role-based access control (RBAC) objects determine whether a user is allowed to perform a given action within a project.
Cluster administrators can use the cluster roles and bindings to control who has various access levels to OpenShift Container Platform itself and all projects.
Developers can use local roles and bindings to control who has access to their projects. Note that authorization is a separate step from authentication, which is more about determining the identity of who is taking the action.
Authorization is managed using:
Authorization object | Description |
---|---|
Rules | Sets of permitted verbs on a set of objects. For example, whether a user or service account can create pods. |
Roles | Collections of rules. You can associate, or bind, users and groups to multiple roles. |
Bindings | Associations between users and/or groups with a role. |
There are two levels of RBAC roles and bindings that control authorization:
RBAC level | Description |
---|---|
Cluster RBAC | Roles and bindings that are applicable across all projects. Cluster roles exist cluster-wide, and cluster role bindings can reference only cluster roles. |
Local RBAC | Roles and bindings that are scoped to a given project. While local roles exist only in a single project, local role bindings can reference both cluster and local roles. |
A cluster role binding is a binding that exists at the cluster level. A role binding exists at the project level. The cluster role view must be bound to a user using a local role binding for that user to view the project. Create local roles only if a cluster role does not provide the set of permissions needed for a particular situation.
This two-level hierarchy allows reuse across multiple projects through the cluster roles while allowing customization inside of individual projects through local roles.
During evaluation, both the cluster role bindings and the local role bindings are used. For example:
- Cluster-wide "allow" rules are checked.
- Locally-bound "allow" rules are checked.
- Deny by default.
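As a minimal sketch of how rules, roles, and bindings fit together, the following hypothetical manifests define a local role that permits reading pods and bind it to a single user; the role, binding, project, and user names are placeholders:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: example-project
rules:
# Rules: permitted verbs on a set of objects
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: example-project
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
# Binding: associates the user alice with the pod-reader role
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: alice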
8.1.1. Default cluster roles
OpenShift Container Platform includes a set of default cluster roles that you can bind to users and groups cluster-wide or locally.
It is not recommended to manually modify the default cluster roles. Modifications to these system roles can prevent a cluster from functioning properly.
Default cluster role | Description |
---|---|
admin | A project manager. If used in a local binding, an admin user has rights to view any resource in the project and modify any resource in the project except for quota. |
basic-user | A user that can get basic information about projects and users. |
cluster-admin | A super-user that can perform any action in any project. When bound to a user with a local binding, they have full control over quota and every action on every resource in the project. |
cluster-status | A user that can get basic cluster status information. |
cluster-reader | A user that can get or view most of the objects but cannot modify them. |
edit | A user that can modify most objects in a project but does not have the power to view or modify roles or bindings. |
self-provisioner | A user that can create their own projects. |
view | A user who cannot make any modifications, but can see most objects in a project. They cannot view or modify roles or bindings. |
Be mindful of the difference between local and cluster bindings. For example, if you bind the cluster-admin role to a user by using a local role binding, it might appear that this user has the privileges of a cluster administrator. This is not the case. Binding the cluster-admin role to a user in a project grants super administrator privileges for only that project to the user. That user has the permissions of the cluster role admin, plus a few additional permissions like the ability to edit rate limits, for that project. This binding can be confusing via the web console UI, which does not list cluster role bindings that are bound to true cluster administrators. However, it does list local role bindings that you can use to locally bind cluster-admin.
The relationships between cluster roles, local roles, cluster role bindings, local role bindings, users, groups and service accounts are illustrated below.
The get pods/exec, get pods/*, and get * rules grant execution privileges when they are applied to a role. Apply the principle of least privilege and assign only the minimal RBAC rights required for users and agents. For more information, see RBAC rules allow execution privileges.
8.1.2. Evaluating authorization
OpenShift Container Platform evaluates authorization by using:
- Identity
- The user name and list of groups that the user belongs to.
- Action
The action you perform. In most cases, this consists of:
- Project: The project you access. A project is a Kubernetes namespace with additional annotations that allows a community of users to organize and manage their content in isolation from other communities.
- Verb: The action itself: get, list, create, update, delete, deletecollection, or watch.
. - Resource name: The API endpoint that you access.
- Bindings
- The full list of bindings, the associations between users or groups with a role.
OpenShift Container Platform evaluates authorization by using the following steps:
- The identity and the project-scoped action are used to find all bindings that apply to the user or their groups.
- Bindings are used to locate all the roles that apply.
- Roles are used to find all the rules that apply.
- The action is checked against each rule to find a match.
- If no matching rule is found, the action is then denied by default.
Remember that users and groups can be associated with, or bound to, multiple roles at the same time.
Project administrators can use the CLI to view local roles and bindings, including a matrix of the verbs and resources each are associated with.
The cluster role bound to the project administrator is limited in a project through a local binding. It is not bound cluster-wide like the cluster roles granted to the cluster-admin or system:admin.
Cluster roles are roles defined at the cluster level but can be bound either at the cluster level or at the project level.
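A quick way to check how this evaluation resolves for a given identity is the oc auth can-i subcommand; the project and user names below are placeholders:
$ oc auth can-i create pods -n my-project
$ oc auth can-i list rolebindings -n my-project --as=alice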
8.1.2.1. Cluster role aggregation
The default admin, edit, view, and cluster-reader cluster roles support cluster role aggregation, where the cluster rules for each role are dynamically updated as new rules are created. This feature is relevant only if you extend the Kubernetes API by creating custom resources.
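For example, a custom cluster role that carries the standard aggregation label shown below would have its rules folded into the default admin role; the API group and resource here are hypothetical:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: aggregate-widgets-admin
  labels:
    # Standard Kubernetes aggregation label picked up by the default admin role
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["example.com"]
  resources: ["widgets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]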
8.2. Projects and namespaces
A Kubernetes namespace provides a mechanism to scope resources in a cluster. The Kubernetes documentation has more information on namespaces.
Namespaces provide a unique scope for:
- Named resources to avoid basic naming collisions.
- Delegated management authority to trusted users.
- The ability to limit community resource consumption.
Most objects in the system are scoped by namespace, but some are excepted and have no namespace, including nodes and users.
A project is a Kubernetes namespace with additional annotations and is the central vehicle by which access to resources for regular users is managed. A project allows a community of users to organize and manage their content in isolation from other communities. Users must be given access to projects by administrators, or if allowed to create projects, automatically have access to their own projects.
Projects can have a separate name, displayName, and description.
- The mandatory name is a unique identifier for the project and is most visible when using the CLI tools or API. The maximum name length is 63 characters.
- The optional displayName is how the project is displayed in the web console (defaults to name).
- The optional description can be a more detailed description of the project and is also visible in the web console.
Each project scopes its own set of:
Object | Description |
---|---|
Objects | Pods, services, replication controllers, etc. |
Policies | Rules for which users can or cannot perform actions on objects. |
Constraints | Quotas for each kind of object that can be limited. |
Service accounts | Service accounts act automatically with designated access to objects in the project. |
Cluster administrators can create projects and delegate administrative rights for the project to any member of the user community. Cluster administrators can also allow developers to create their own projects.
Developers and administrators can interact with projects by using the CLI or the web console.
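For example, a project's name, display name, and description can all be set when creating it from the CLI; the values here are illustrative:
$ oc new-project my-project --display-name="My Project" --description="Example project for the web team"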
8.3. Default projects
OpenShift Container Platform comes with a number of default projects, and projects starting with openshift- are the most essential to users. These projects host master components that run as pods and other infrastructure components. The pods created in these namespaces that have a critical pod annotation are considered critical, and they have guaranteed admission by the kubelet. Pods created for master components in these namespaces are already marked as critical.
Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components.
The following default projects are considered highly privileged: default, kube-public, kube-system, openshift, openshift-infra, openshift-node, and other system-created projects that have the openshift.io/run-level label set to 0 or 1. Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects.
8.4. Viewing cluster roles and bindings
You can use the oc CLI to view cluster roles and bindings by using the oc describe command.
Prerequisites
- Install the oc CLI.
- Obtain permission to view the cluster roles and bindings.
Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any resource, including viewing cluster roles and bindings.
Procedure
To view the cluster roles and their associated rule sets:
$ oc describe clusterrole.rbac
Example output
Name: admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- .packages.apps.redhat.com [] [] [* create update patch delete get list watch] imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch] imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch] secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update] buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates [] [] [create delete deletecollection get list patch update watch get list watch] routes [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances [] [] [create delete deletecollection get list patch update watch get list watch] templates [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] serviceaccounts [] [] [create delete deletecollection get list patch 
update watch impersonate create delete deletecollection patch update get list watch] imagestreams/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings [] [] [create delete deletecollection get list patch update watch] roles [] [] [create delete deletecollection get list patch update watch] rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] configmaps [] [] [create delete deletecollection patch update get list watch] endpoints [] [] [create delete deletecollection patch update get list watch] persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch] pods [] [] [create delete deletecollection patch update get list watch] replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch] replicationcontrollers [] [] [create delete deletecollection patch update get list watch] services [] [] [create delete deletecollection patch update get list watch] daemonsets.apps [] [] [create delete deletecollection patch update get list watch] deployments.apps/scale [] [] [create delete deletecollection patch update get list watch] deployments.apps [] [] [create delete deletecollection patch update get list watch] replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch] replicasets.apps [] [] [create delete deletecollection patch update get list watch] statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch] statefulsets.apps [] [] [create delete deletecollection patch update get list watch] horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch] cronjobs.batch [] [] [create delete deletecollection patch update get list watch] jobs.batch [] [] [create delete deletecollection patch update get list watch] daemonsets.extensions [] [] [create delete deletecollection patch update get list watch] deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch] deployments.extensions [] [] [create delete deletecollection patch update get list watch] ingresses.extensions [] [] [create delete deletecollection patch update get list watch] replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch] replicasets.extensions [] [] [create delete deletecollection patch update get list watch] replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch] poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch] deployments.apps/rollback [] [] [create delete deletecollection patch update] deployments.extensions/rollback [] [] [create delete deletecollection patch update] 
catalogsources.operators.coreos.com [] [] [create update patch delete get list watch] clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch] installplans.operators.coreos.com [] [] [create update patch delete get list watch] packagemanifests.operators.coreos.com [] [] [create update patch delete get list watch] subscriptions.operators.coreos.com [] [] [create update patch delete get list watch] buildconfigs/instantiate [] [] [create] buildconfigs/instantiatebinary [] [] [create] builds/clone [] [] [create] deploymentconfigrollbacks [] [] [create] deploymentconfigs/instantiate [] [] [create] deploymentconfigs/rollback [] [] [create] imagestreamimports [] [] [create] localresourceaccessreviews [] [] [create] localsubjectaccessreviews [] [] [create] podsecuritypolicyreviews [] [] [create] podsecuritypolicyselfsubjectreviews [] [] [create] podsecuritypolicysubjectreviews [] [] [create] resourceaccessreviews [] [] [create] routes/custom-host [] [] [create] subjectaccessreviews [] [] [create] subjectrulesreviews [] [] [create] deploymentconfigrollbacks.apps.openshift.io [] [] [create] deploymentconfigs.apps.openshift.io/instantiate [] [] [create] deploymentconfigs.apps.openshift.io/rollback [] [] [create] localsubjectaccessreviews.authorization.k8s.io [] [] [create] localresourceaccessreviews.authorization.openshift.io [] [] [create] localsubjectaccessreviews.authorization.openshift.io [] [] [create] resourceaccessreviews.authorization.openshift.io [] [] [create] subjectaccessreviews.authorization.openshift.io [] [] [create] subjectrulesreviews.authorization.openshift.io [] [] [create] buildconfigs.build.openshift.io/instantiate [] [] [create] buildconfigs.build.openshift.io/instantiatebinary [] [] [create] builds.build.openshift.io/clone [] [] [create] imagestreamimports.image.openshift.io [] [] [create] routes.route.openshift.io/custom-host [] [] [create] podsecuritypolicyreviews.security.openshift.io [] [] [create] podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create] podsecuritypolicysubjectreviews.security.openshift.io [] [] [create] jenkins.build.openshift.io [] [] [edit view view admin edit view] builds [] [] [get create delete deletecollection get list patch update watch get list watch] builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch] projects [] [] [get delete get delete get patch update] projects.project.openshift.io [] [] [get delete get delete get patch update] namespaces [] [] [get get list watch] pods/attach [] [] [get list watch create delete deletecollection patch update] pods/exec [] [] [get list watch create delete deletecollection patch update] pods/portforward [] [] [get list watch create delete deletecollection patch update] pods/proxy [] [] [get list watch create delete deletecollection patch update] services/proxy [] [] [get list watch create delete deletecollection patch update] routes/status [] [] [get list watch update] routes.route.openshift.io/status [] [] [get list watch update] appliedclusterresourcequotas [] [] [get list watch] bindings [] [] [get list watch] builds/log [] [] [get list watch] deploymentconfigs/log [] [] [get list watch] deploymentconfigs/status [] [] [get list watch] events [] [] [get list watch] imagestreams/status [] [] [get list watch] limitranges [] [] [get list watch] namespaces/status [] [] [get list watch] pods/log [] [] [get list watch] pods/status [] [] [get list watch] replicationcontrollers/status [] [] [get list 
watch] resourcequotas/status [] [] [get list watch] resourcequotas [] [] [get list watch] resourcequotausages [] [] [get list watch] rolebindingrestrictions [] [] [get list watch] deploymentconfigs.apps.openshift.io/log [] [] [get list watch] deploymentconfigs.apps.openshift.io/status [] [] [get list watch] controllerrevisions.apps [] [] [get list watch] rolebindingrestrictions.authorization.openshift.io [] [] [get list watch] builds.build.openshift.io/log [] [] [get list watch] imagestreams.image.openshift.io/status [] [] [get list watch] appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch] imagestreams/layers [] [] [get update get] imagestreams.image.openshift.io/layers [] [] [get update get] builds/details [] [] [update] builds.build.openshift.io/details [] [] [update] Name: basic-user Labels: <none> Annotations: openshift.io/description: A user that can get basic information about projects. rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- selfsubjectrulesreviews [] [] [create] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.openshift.io [] [] [create] clusterroles.rbac.authorization.k8s.io [] [] [get list watch] clusterroles [] [] [get list] clusterroles.authorization.openshift.io [] [] [get list] storageclasses.storage.k8s.io [] [] [get list] users [] [~] [get] users.user.openshift.io [] [~] [get] projects [] [] [list watch] projects.project.openshift.io [] [] [list watch] projectrequests [] [] [list] projectrequests.project.openshift.io [] [] [list] Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- *.* [] [] [*] [*] [] [*] ...
To view the current set of cluster role bindings, which shows the users and groups that are bound to various roles:
$ oc describe clusterrolebinding.rbac
Example output
Name: alertmanager-main Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: alertmanager-main Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount alertmanager-main openshift-monitoring Name: basic-users Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: basic-user Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated Name: cloud-credential-operator-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cloud-credential-operator-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-cloud-credential-operator Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:masters Name: cluster-admins Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:cluster-admins User system:admin Name: cluster-api-manager-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cluster-api-manager-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-machine-api ...
8.5. Viewing local roles and bindings
You can use the oc CLI to view local roles and bindings by using the oc describe command.
Prerequisites
- Install the oc CLI.
- Obtain permission to view the local roles and bindings:
- Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any resource, including viewing local roles and bindings.
- Users with the admin default cluster role bound locally can view and manage roles and bindings in that project.
Procedure
To view the current set of local role bindings, which show the users and groups that are bound to various roles for the current project:
$ oc describe rolebinding.rbac
To view the local role bindings for a different project, add the -n flag to the command:
$ oc describe rolebinding.rbac -n joe-project
Example output
Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa... Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe-project Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe-project Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe-project
8.6. Adding roles to users
You can use the oc adm administrator CLI to manage the roles and bindings.
Binding, or adding, a role to users or groups gives the user or group the access that is granted by the role. You can add and remove roles to and from users and groups using oc adm policy commands.
You can bind any of the default cluster roles to local users or groups in your project.
Procedure
Add a role to a user in a specific project:
$ oc adm policy add-role-to-user <role> <user> -n <project>
For example, you can add the admin role to the alice user in the joe project by running:
$ oc adm policy add-role-to-user admin alice -n joe
Tip: You can alternatively apply the following YAML to add the role to the user:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: admin-0
  namespace: joe
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: alice
View the local role bindings and verify the addition in the output:
$ oc describe rolebinding.rbac -n <project>
For example, to view the local role bindings for the joe project:
$ oc describe rolebinding.rbac -n joe
Example output
Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: admin-0 Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User alice 1 Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa... Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe
- 1
- The alice user has been added to the admin-0 RoleBinding.
8.7. Creating a local role
You can create a local role for a project and then bind it to a user.
Procedure
To create a local role for a project, run the following command:
$ oc create role <name> --verb=<verb> --resource=<resource> -n <project>
In this command, specify:
- <name>, the local role’s name
- <verb>, a comma-separated list of the verbs to apply to the role
- <resource>, the resources that the role applies to
- <project>, the project name
For example, to create a local role that allows a user to view pods in the blue project, run the following command (a roughly equivalent Role manifest is shown after this procedure):
$ oc create role podview --verb=get --resource=pod -n blue
- To bind the new role to a user, run the following command:
$ oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue
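The following manifest is a sketch of the Role that the podview example above creates; it is shown for illustration only and might differ slightly from the generated object:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: podview
  namespace: blue
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get"]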
8.8. Creating a cluster role
You can create a cluster role.
Procedure
To create a cluster role, run the following command:
$ oc create clusterrole <name> --verb=<verb> --resource=<resource>
In this command, specify:
- <name>, the cluster role’s name
- <verb>, a comma-separated list of the verbs to apply to the role
- <resource>, the resources that the role applies to
For example, to create a cluster role that allows a user to view pods, run the following command:
$ oc create clusterrole podviewonly --verb=get --resource=pod
8.9. Local role binding commands
When you manage a user or group’s associated roles for local role bindings using the following operations, a project may be specified with the -n flag. If it is not specified, then the current project is used.
You can use the following commands for local RBAC management.
Command | Description |
---|---|
oc adm policy who-can <verb> <resource> | Indicates which users can perform an action on a resource. |
oc adm policy add-role-to-user <role> <username> | Binds a specified role to specified users in the current project. |
oc adm policy remove-role-from-user <role> <username> | Removes a given role from specified users in the current project. |
oc adm policy remove-user <username> | Removes specified users and all of their roles in the current project. |
oc adm policy add-role-to-group <role> <groupname> | Binds a given role to specified groups in the current project. |
oc adm policy remove-role-from-group <role> <groupname> | Removes a given role from specified groups in the current project. |
oc adm policy remove-group <groupname> | Removes specified groups and all of their roles in the current project. |
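For example, to bind the default view cluster role to a group in the my-project project (the group name is a placeholder):
$ oc adm policy add-role-to-group view my-group -n my-project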
8.10. Cluster role binding commands
You can also manage cluster role bindings using the following operations. The -n flag is not used for these operations because cluster role bindings use non-namespaced resources.
Command | Description |
---|---|
oc adm policy add-cluster-role-to-user <role> <username> | Binds a given role to specified users for all projects in the cluster. |
oc adm policy remove-cluster-role-from-user <role> <username> | Removes a given role from specified users for all projects in the cluster. |
oc adm policy add-cluster-role-to-group <role> <groupname> | Binds a given role to specified groups for all projects in the cluster. |
oc adm policy remove-cluster-role-from-group <role> <groupname> | Removes a given role from specified groups for all projects in the cluster. |
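For example, to grant cluster-wide read-only access to a group (the group name is a placeholder):
$ oc adm policy add-cluster-role-to-group cluster-reader my-group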
8.11. Creating a cluster admin
The cluster-admin role is required to perform administrator level tasks on the OpenShift Container Platform cluster, such as modifying cluster resources.
Prerequisites
- You must have created a user to define as the cluster admin.
Procedure
Define the user as a cluster admin:
$ oc adm policy add-cluster-role-to-user cluster-admin <user>
8.12. Cluster role bindings for unauthenticated groups
Before OpenShift Container Platform 4.17, unauthenticated groups were allowed access to some cluster roles. Clusters updated from versions before OpenShift Container Platform 4.17 retain this access for unauthenticated groups.
For security reasons OpenShift Container Platform 4.17 does not allow unauthenticated groups to have default access to cluster roles.
There are use cases where it might be necessary to add system:unauthenticated to a cluster role.
Cluster administrators can add unauthenticated users to the following cluster roles:
- system:scope-impersonation
- system:webhook
- system:oauth-token-deleter
- self-access-reviewer
Always verify compliance with your organization’s security standards when modifying unauthenticated access.
Chapter 9. Removing the kubeadmin user
9.1. The kubeadmin user
OpenShift Container Platform creates a cluster administrator, kubeadmin, after the installation process completes.
This user has the cluster-admin role automatically applied and is treated as the root user for the cluster. The password is dynamically generated and unique to your OpenShift Container Platform environment. After installation completes, the password is provided in the installation program’s output. For example:
INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>
9.2. Removing the kubeadmin user
After you define an identity provider and create a new cluster-admin user, you can remove the kubeadmin user to improve cluster security.
If you follow this procedure before another user is a cluster-admin, then OpenShift Container Platform must be reinstalled. It is not possible to undo this command.
Prerequisites
- You must have configured at least one identity provider.
- You must have added the cluster-admin role to a user.
- You must be logged in as an administrator.
Procedure
Remove the kubeadmin secrets:
$ oc delete secrets kubeadmin -n kube-system
Chapter 10. Understanding and creating service accounts
10.1. Service accounts overview
A service account is an OpenShift Container Platform account that allows a component to directly access the API. Service accounts are API objects that exist within each project. Service accounts provide a flexible way to control API access without sharing a regular user’s credentials.
When you use the OpenShift Container Platform CLI or web console, your API token authenticates you to the API. You can associate a component with a service account so that they can access the API without using a regular user’s credentials. For example, service accounts can allow:
- Replication controllers to make API calls to create or delete pods.
- Applications inside containers to make API calls for discovery purposes.
- External applications to make API calls for monitoring or integration purposes.
Each service account’s user name is derived from its project and name:
system:serviceaccount:<project>:<name>
Every service account is also a member of two groups:
Group | Description |
---|---|
system:serviceaccounts | Includes all service accounts in the system. |
system:serviceaccounts:<project> | Includes all service accounts in the specified project. |
Each service account automatically contains two secrets:
- An API token
- Credentials for the OpenShift Container Registry
The generated API token and registry credentials do not expire, but you can revoke them by deleting the secret. When you delete the secret, a new one is automatically generated to take its place.
10.2. Creating service accounts
You can create a service account in a project and grant it permissions by binding it to a role.
Procedure
Optional: To view the service accounts in the current project:
$ oc get sa
Example output
NAME SECRETS AGE builder 2 2d default 2 2d deployer 2 2d
To create a new service account in the current project:
$ oc create sa <service_account_name> 1
- 1
- To create a service account in a different project, specify -n <project_name>.
Example output
serviceaccount "robot" created
Tip: You can alternatively apply the following YAML to create the service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: <service_account_name>
  namespace: <current_project>
Optional: View the secrets for the service account:
$ oc describe sa robot
Example output
Name: robot Namespace: project1 Labels: <none> Annotations: <none> Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: robot-token-f4khf Events: <none>
10.3. Examples of granting roles to service accounts
You can grant roles to service accounts in the same way that you grant roles to a regular user account.
You can modify the service accounts for the current project. For example, to add the view role to the robot service account in the top-secret project:
$ oc policy add-role-to-user view system:serviceaccount:top-secret:robot
Tip: You can alternatively apply the following YAML to add the role:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: view
  namespace: top-secret
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: robot
  namespace: top-secret
You can also grant access to a specific service account in a project. For example, from the project to which the service account belongs, use the -z flag and specify the <service_account_name>:
$ oc policy add-role-to-user <role_name> -z <service_account_name>
Important: If you want to grant access to a specific service account in a project, use the -z flag. Using this flag helps prevent typos and ensures that access is granted to only the specified service account.
Tip: You can alternatively apply the following YAML to add the role:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: <rolebinding_name>
  namespace: <current_project_name>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: <role_name>
subjects:
- kind: ServiceAccount
  name: <service_account_name>
  namespace: <current_project_name>
To modify a different namespace, you can use the -n option to indicate the project namespace it applies to, as shown in the following examples.
For example, to allow all service accounts in all projects to view resources in the my-project project:
$ oc policy add-role-to-group view system:serviceaccounts -n my-project
Tip: You can alternatively apply the following YAML to add the role:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: view
  namespace: my-project
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
To allow all service accounts in the managers project to edit resources in the my-project project:
$ oc policy add-role-to-group edit system:serviceaccounts:managers -n my-project
Tip: You can alternatively apply the following YAML to add the role:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: edit
  namespace: my-project
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:managers
Chapter 11. Using service accounts in applications
11.1. Service accounts overview
A service account is an OpenShift Container Platform account that allows a component to directly access the API. Service accounts are API objects that exist within each project. Service accounts provide a flexible way to control API access without sharing a regular user’s credentials.
When you use the OpenShift Container Platform CLI or web console, your API token authenticates you to the API. You can associate a component with a service account so that they can access the API without using a regular user’s credentials. For example, service accounts can allow:
- Replication controllers to make API calls to create or delete pods.
- Applications inside containers to make API calls for discovery purposes.
- External applications to make API calls for monitoring or integration purposes.
Each service account’s user name is derived from its project and name:
system:serviceaccount:<project>:<name>
Every service account is also a member of two groups:
Group | Description |
---|---|
system:serviceaccounts | Includes all service accounts in the system. |
system:serviceaccounts:<project> | Includes all service accounts in the specified project. |
Each service account automatically contains two secrets:
- An API token
- Credentials for the OpenShift Container Registry
The generated API token and registry credentials do not expire, but you can revoke them by deleting the secret. When you delete the secret, a new one is automatically generated to take its place.
11.2. Default service accounts
Your OpenShift Container Platform cluster contains default service accounts for cluster management and generates more service accounts for each project.
11.2.1. Default cluster service accounts
Several infrastructure controllers run using service account credentials. The following service accounts are created in the OpenShift Container Platform infrastructure project (openshift-infra) at server start, and given the following roles cluster-wide:
Service account | Description |
---|---|
replication-controller | Assigned the system:replication-controller role. |
deployment-controller | Assigned the system:deployment-controller role. |
build-controller | Assigned the system:build-controller role. Additionally, the build-controller service account is included in the privileged security context constraint to create privileged build pods. |
11.2.2. Default project service accounts and roles
Three service accounts are automatically created in each project:
Service account | Usage |
---|---|
builder | Used by build pods. It is given the system:image-builder role, which allows pushing images to any image stream in the project using the internal container image registry. Note: The builder service account is not created if the Build cluster capability is not enabled. |
deployer | Used by deployment pods and given the system:deployer role, which allows viewing and modifying replication controllers and pods in the project. Note: The deployer service account is not created if the DeploymentConfig cluster capability is not enabled. |
default | Used to run all other pods unless they specify a different service account. |
All service accounts in a project are given the system:image-puller role, which allows pulling images from any image stream in the project using the internal container image registry.
11.2.3. Automatically generated image pull secrets
By default, OpenShift Container Platform creates an image pull secret for each service account.
Prior to OpenShift Container Platform 4.16, a long-lived service account API token secret was also generated for each service account that was created. Starting with OpenShift Container Platform 4.16, this service account API token secret is no longer created.
After upgrading to 4.17, any existing long-lived service account API token secrets are not deleted and will continue to function. For information about detecting long-lived API tokens that are in use in your cluster or deleting them if they are not needed, see the Red Hat Knowledgebase article Long-lived service account API tokens in OpenShift Container Platform.
This image pull secret is necessary to integrate the OpenShift image registry into the cluster’s user authentication and authorization system.
However, if you do not enable the ImageRegistry capability or if you disable the integrated OpenShift image registry in the Cluster Image Registry Operator’s configuration, an image pull secret is not generated for each service account.
When the integrated OpenShift image registry is disabled on a cluster that previously had it enabled, the previously generated image pull secrets are deleted automatically.
11.3. Creating service accounts
You can create a service account in a project and grant it permissions by binding it to a role.
Procedure
Optional: To view the service accounts in the current project:
$ oc get sa
Example output
NAME SECRETS AGE builder 2 2d default 2 2d deployer 2 2d
To create a new service account in the current project:
$ oc create sa <service_account_name> 1
- 1
- To create a service account in a different project, specify -n <project_name>.
Example output
serviceaccount "robot" created
Tip: You can alternatively apply the following YAML to create the service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: <service_account_name>
  namespace: <current_project>
Optional: View the secrets for the service account:
$ oc describe sa robot
Example output
Name: robot Namespace: project1 Labels: <none> Annotations: <none> Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: robot-token-f4khf Events: <none>
Chapter 12. Using a service account as an OAuth client
12.1. Service accounts as OAuth clients
You can use a service account as a constrained form of OAuth client. Service accounts can request only a subset of scopes that allow access to some basic user information and role-based power inside of the service account’s own namespace:
- user:info
- user:check-access
- role:<any_role>:<service_account_namespace>
- role:<any_role>:<service_account_namespace>:!
When using a service account as an OAuth client:
- client_id is system:serviceaccount:<service_account_namespace>:<service_account_name>.
- client_secret can be any of the API tokens for that service account. For example:
$ oc sa get-token <service_account_name>
- To get WWW-Authenticate challenges, set a serviceaccounts.openshift.io/oauth-want-challenges annotation on the service account to true.
- redirect_uri must match an annotation on the service account.
12.1.1. Redirect URIs for service accounts as OAuth clients
Annotation keys must have the prefix serviceaccounts.openshift.io/oauth-redirecturi. or serviceaccounts.openshift.io/oauth-redirectreference., such as:
serviceaccounts.openshift.io/oauth-redirecturi.<name>
In its simplest form, the annotation can be used to directly specify valid redirect URIs. For example:
"serviceaccounts.openshift.io/oauth-redirecturi.first": "https://example.com" "serviceaccounts.openshift.io/oauth-redirecturi.second": "https://other.com"
The first and second postfixes in the above example are used to separate the two valid redirect URIs.
In more complex configurations, static redirect URIs may not be enough. For example, perhaps you want all Ingresses for a route to be considered valid. This is where dynamic redirect URIs via the serviceaccounts.openshift.io/oauth-redirectreference. prefix come into play.
For example:
"serviceaccounts.openshift.io/oauth-redirectreference.first": "{\"kind\":\"OAuthRedirectReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"Route\",\"name\":\"jenkins\"}}"
Since the value for this annotation contains serialized JSON data, it is easier to see in an expanded format:
{ "kind": "OAuthRedirectReference", "apiVersion": "v1", "reference": { "kind": "Route", "name": "jenkins" } }
Now you can see that an OAuthRedirectReference allows us to reference the route named jenkins. Thus, all Ingresses for that route will now be considered valid. The full specification for an OAuthRedirectReference is:
{ "kind": "OAuthRedirectReference", "apiVersion": "v1", "reference": { "kind": ..., 1 "name": ..., 2 "group": ... 3 } }
- 1
- kind refers to the type of the object being referenced. Currently, only route is supported.
- 2
- name refers to the name of the object. The object must be in the same namespace as the service account.
- 3
- group refers to the group of the object. Leave this blank, as the group for a route is the empty string.
Both annotation prefixes can be combined to override the data provided by the reference object. For example:
"serviceaccounts.openshift.io/oauth-redirecturi.first": "custompath" "serviceaccounts.openshift.io/oauth-redirectreference.first": "{\"kind\":\"OAuthRedirectReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"Route\",\"name\":\"jenkins\"}}"
The first postfix is used to tie the annotations together. Assuming that the jenkins route had an Ingress of https://example.com, now https://example.com/custompath is considered valid, but https://example.com is not. The format for partially supplying override data is as follows:
Type | Syntax |
---|---|
Scheme | "https://" |
Hostname | "//website.com" |
Port | "//:8000" |
Path | "examplepath" |
Specifying a hostname override will replace the hostname data from the referenced object, which is not likely to be desired behavior.
Any combination of the above syntax can be combined using the following format:
<scheme:>//<hostname><:port>/<path>
The same object can be referenced more than once for more flexibility:
"serviceaccounts.openshift.io/oauth-redirecturi.first": "custompath" "serviceaccounts.openshift.io/oauth-redirectreference.first": "{\"kind\":\"OAuthRedirectReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"Route\",\"name\":\"jenkins\"}}" "serviceaccounts.openshift.io/oauth-redirecturi.second": "//:8000" "serviceaccounts.openshift.io/oauth-redirectreference.second": "{\"kind\":\"OAuthRedirectReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"Route\",\"name\":\"jenkins\"}}"
Assuming that the route named jenkins has an Ingress of https://example.com, then both https://example.com:8000 and https://example.com/custompath are considered valid.
Static and dynamic annotations can be used at the same time to achieve the desired behavior:
"serviceaccounts.openshift.io/oauth-redirectreference.first": "{\"kind\":\"OAuthRedirectReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"Route\",\"name\":\"jenkins\"}}" "serviceaccounts.openshift.io/oauth-redirecturi.second": "https://other.com"
Chapter 13. Scoping tokens
13.1. About scoping tokens
You can create scoped tokens to delegate some of your permissions to another user or service account. For example, a project administrator might want to delegate the power to create pods.
A scoped token is a token that identifies as a given user but is limited to certain actions by its scope. Only a user with the cluster-admin role can create scoped tokens.
Scopes are evaluated by converting the set of scopes for a token into a set of PolicyRules. Then, the request is matched against those rules. The request attributes must match at least one of the scope rules to be passed to the "normal" authorizer for further authorization checks.
13.1.1. User scopes
User scopes are focused on getting information about a given user. They are intent-based, so the rules are automatically created for you:
- user:full - Allows full read/write access to the API with all of the user’s permissions.
- user:info - Allows read-only access to information about the user, such as name and groups.
- user:check-access - Allows access to self-localsubjectaccessreviews and self-subjectaccessreviews. These are the variables where you pass an empty user and groups in your request object.
- user:list-projects - Allows read-only access to list the projects the user has access to.
13.1.2. Role scope
The role scope allows you to have the same level of access as a given role filtered by namespace.
- role:<cluster-role name>:<namespace or * for all> - Limits the scope to the rules specified by the cluster role, but only in the specified namespace.
Note: Caveat: This prevents escalating access. Even if the role allows access to resources like secrets, rolebindings, and roles, this scope will deny access to those resources. This helps prevent unexpected escalations. Many people do not think of a role like edit as being an escalating role, but with access to a secret it is.
- role:<cluster-role name>:<namespace or * for all>:! - This is similar to the example above, except that including the bang causes this scope to allow escalating access.
13.2. Adding unauthenticated groups to cluster roles
As a cluster administrator, you can add unauthenticated users to the following cluster roles in OpenShift Container Platform by creating a cluster role binding. Unauthenticated users do not have access to non-public cluster roles. This should only be done in specific use cases when necessary.
You can add unauthenticated users to the following cluster roles:
- system:scope-impersonation
- system:webhook
- system:oauth-token-deleter
- self-access-reviewer
Always verify compliance with your organization’s security standards when modifying unauthenticated access.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
Create a YAML file named add-<cluster_role>-unauth.yaml and add the following content:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  name: <cluster_role>access-unauthenticated
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: <cluster_role>
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:unauthenticated
Apply the configuration by running the following command:
$ oc apply -f add-<cluster_role>-unauth.yaml
Chapter 14. Using bound service account tokens
You can use bound service account tokens, which improve the ability to integrate with cloud provider identity and access management (IAM) services, such as AWS IAM or Google Cloud Platform IAM.
14.1. About bound service account tokens
You can use bound service account tokens to limit the scope of permissions for a given service account token. These tokens are audience and time-bound. This facilitates the authentication of a service account to an IAM role and the generation of temporary credentials mounted to a pod. You can request bound service account tokens by using volume projection and the TokenRequest API.
14.2. Configuring bound service account tokens using volume projection
You can configure pods to request bound service account tokens by using volume projection.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have created a service account. This procedure assumes that the service account is named build-robot.
Procedure
Optional: Set the service account issuer.
This step is typically not required if the bound tokens are used only within the cluster.
Important: If you change the service account issuer to a custom one, the previous service account issuer is still trusted for the next 24 hours.
You can force all holders to request a new bound token either by manually restarting all pods in the cluster or by performing a rolling node restart. Before performing either action, wait for a new revision of the Kubernetes API server pods to roll out with your service account issuer changes.
Edit the cluster Authentication object:
$ oc edit authentications cluster
Set the spec.serviceAccountIssuer field to the desired service account issuer value:
spec:
  serviceAccountIssuer: https://test.default.svc 1
- 1
- This value should be a URL from which the recipient of a bound token can source the public keys necessary to verify the signature of the token. The default is https://kubernetes.default.svc.
- Save the file to apply the changes.
Wait for a new revision of the Kubernetes API server pods to roll out. It can take several minutes for all nodes to update to the new revision. Run the following command:
$ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
Review the NodeInstallerProgressing status condition for the Kubernetes API server to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:
AllNodesAtLatestRevision 3 nodes are at revision 12 1
- 1
- In this example, the latest revision number is 12.
If the output shows a message similar to one of the following messages, the update is still in progress. Wait a few minutes and try again.
- 3 nodes are at revision 11; 0 nodes have achieved new revision 12
- 2 nodes are at revision 11; 1 nodes are at revision 12
Optional: Force the holder to request a new bound token either by performing a rolling node restart or by manually restarting all pods in the cluster.
Perform a rolling node restart:
Warning: It is not recommended to perform a rolling node restart if you have custom workloads running on your cluster, because it can cause a service interruption. Instead, manually restart all pods in the cluster.
Restart nodes sequentially. Wait for the node to become fully available before restarting the next node. See Rebooting a node gracefully for instructions on how to drain, restart, and mark a node as schedulable again.
Manually restart all pods in the cluster:
Warning: Be aware that running this command causes a service interruption, because it deletes every running pod in every namespace. These pods will automatically restart after they are deleted.
Run the following command:
$ for I in $(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{"\n"} {end}'); \ do oc delete pods --all -n $I; \ sleep 1; \ done
Configure a pod to use a bound service account token by using volume projection.
Create a file called pod-projected-svc-token.yaml with the following contents:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  securityContext:
    runAsNonRoot: true 1
    seccompProfile:
      type: RuntimeDefault 2
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /var/run/secrets/tokens
      name: vault-token
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: [ALL]
  serviceAccountName: build-robot 3
  volumes:
  - name: vault-token
    projected:
      sources:
      - serviceAccountToken:
          path: vault-token 4
          expirationSeconds: 7200 5
          audience: vault 6
- 1
- Prevents containers from running as root to minimize compromise risks.
- 2
- Sets the default seccomp profile, limiting to essential system calls, to reduce risks.
- 3
- A reference to an existing service account.
- 4
- The path relative to the mount point of the file to project the token into.
- 5
- Optionally set the expiration of the service account token, in seconds. The default value is 3600 seconds (1 hour), and this value must be at least 600 seconds (10 minutes). The kubelet starts trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours.
- 6
- Optionally set the intended audience of the token. The recipient of a token should verify that the recipient identity matches the audience claim of the token, and should otherwise reject the token. The audience defaults to the identifier of the API server.
Note: To prevent unexpected failure, OpenShift Container Platform overrides the expirationSeconds value to be one year from the initial token generation with the --service-account-extend-token-expiration default of true. You cannot change this setting.

Create the pod:
$ oc create -f pod-projected-svc-token.yaml
The kubelet requests and stores the token on behalf of the pod, makes the token available to the pod at a configurable file path, and refreshes the token as it approaches expiration.
The application that uses the bound token must handle reloading the token when it rotates.
The kubelet rotates the token if it is older than 80 percent of its time to live, or if the token is older than 24 hours.
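Because the kubelet refreshes the projected file in place, one simple pattern is to read the token file on every use instead of caching its contents in memory. The following is a minimal sketch only; the Vault URL and login path are placeholder values, and the token path matches the volume mount in the earlier pod example:

# Illustrative only: re-read the projected token on each request instead of caching it
TOKEN_FILE=/var/run/secrets/tokens/vault-token
curl -sS -H "Authorization: Bearer $(cat ${TOKEN_FILE})" \
  https://vault.example.com/v1/auth/kubernetes/login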
14.3. Creating bound service account tokens outside the pod
Prerequisites
-
You have created a service account. This procedure assumes that the service account is named
build-robot
.
Procedure
Create the bound service account token outside the pod by running the following command:
$ oc create token build-robot
Example output
eyJhbGciOiJSUzI1NiIsImtpZCI6IkY2M1N4MHRvc2xFNnFSQlA4eG9GYzVPdnN3NkhIV0tRWmFrUDRNcWx4S0kifQ.eyJhdWQiOlsiaHR0cHM6Ly9pc3N1ZXIyLnRlc3QuY29tIiwiaHR0cHM6Ly9pc3N1ZXIxLnRlc3QuY29tIiwiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4cCI6MTY3OTU0MzgzMCwiaWF0IjoxNjc5NTQwMjMwLCJpc3MiOiJodHRwczovL2lzc3VlcjIudGVzdC5jb20iLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImRlZmF1bHQiLCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoidGVzdC1zYSIsInVpZCI6ImM3ZjA4MjkwLWIzOTUtNGM4NC04NjI4LTMzMTM1NTVhNWY1OSJ9fSwibmJmIjoxNjc5NTQwMjMwLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDp0ZXN0LXNhIn0.WyAOPvh1BFMUl3LNhBCrQeaB5wSynbnCfojWuNNPSilT4YvFnKibxwREwmzHpV4LO1xOFZHSi6bXBOmG_o-m0XNDYL3FrGHd65mymiFyluztxa2lgHVxjw5reIV5ZLgNSol3Y8bJqQqmNg3rtQQWRML2kpJBXdDHNww0E5XOypmffYkfkadli8lN5QQD-MhsCbiAF8waCYs8bj6V6Y7uUKTcxee8sCjiRMVtXKjQtooERKm-CH_p57wxCljIBeM89VdaR51NJGued4hVV5lxvVrYZFu89lBEAq4oyQN_d6N1vBWGXQMyoihnt_fQjn-NfnlJWk-3NSZDIluDJAv7e-MTEk3geDrHVQKNEzDei2-Un64hSzb-n1g1M0Vn0885wQBQAePC9UlZm8YZlMNk1tq6wIUKQTMv3HPfi5HtBRqVc2eVs0EfMX4-x-PHhPCasJ6qLJWyj6DvyQ08dP4DW_TWZVGvKlmId0hzwpg59TTcLR0iCklSEJgAVEEd13Aa_M0-faD11L3MhUGxw0qxgOsPczdXUsolSISbefs7OKymzFSIkTAn9sDQ8PHMOsuyxsK8vzfrR-E0z7MAeguZ2kaIY7cZqbN6WFy0caWgx46hrKem9vCKALefElRYbCg3hcBmowBcRTOqaFHLNnHghhU1LaRpoFzH7OUarqX9SGQ
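If your version of oc supports the same options as the upstream kubectl create token command, you can also request a specific audience and lifetime for the token. Verify the flags against your client before relying on them; the values below are examples only:

$ oc create token build-robot --audience=vault --duration=3600s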
Additional resources
Chapter 15. Managing security context constraints
In OpenShift Container Platform, you can use security context constraints (SCCs) to control permissions for the pods in your cluster.
Default SCCs are created during installation and when you install some Operators or other components. As a cluster administrator, you can also create your own SCCs by using the OpenShift CLI (oc
).
Do not modify the default SCCs. Customizing the default SCCs can lead to issues when some of the platform pods deploy or OpenShift Container Platform is upgraded. Additionally, the default SCC values are reset to the defaults during some cluster upgrades, which discards all customizations to those SCCs.
Instead of modifying the default SCCs, create and modify your own SCCs as needed. For detailed steps, see Creating security context constraints.
15.1. About security context constraints
Similar to the way that RBAC resources control user access, administrators can use security context constraints (SCCs) to control permissions for pods. These permissions determine the actions that a pod can perform and what resources it can access. You can use SCCs to define a set of conditions that a pod must run with to be accepted into the system.
Security context constraints allow an administrator to control:
-
Whether a pod can run privileged containers with the
allowPrivilegedContainer
flag -
Whether a pod is constrained with the
allowPrivilegeEscalation
flag - The capabilities that a container can request
- The use of host directories as volumes
- The SELinux context of the container
- The container user ID
- The use of host namespaces and networking
-
The allocation of an
FSGroup
that owns the pod volumes - The configuration of allowable supplemental groups
- Whether a container requires write access to its root file system
- The usage of volume types
-
The configuration of allowable
seccomp
profiles
Do not set the openshift.io/run-level
label on any namespaces in OpenShift Container Platform. This label is for use by internal OpenShift Container Platform components to manage the startup of major API groups, such as the Kubernetes API server and OpenShift API server. If the openshift.io/run-level
label is set, no SCCs are applied to pods in that namespace, causing any workloads running in that namespace to be highly privileged.
15.1.1. Default security context constraints
The cluster contains several default security context constraints (SCCs) as described in the table below. Additional SCCs might be installed when you install Operators or other components to OpenShift Container Platform.
Do not modify the default SCCs. Customizing the default SCCs can lead to issues when some of the platform pods deploy or OpenShift Container Platform is upgraded. Additionally, the default SCC values are reset to the defaults during some cluster upgrades, which discards all customizations to those SCCs.
Instead of modifying the default SCCs, create and modify your own SCCs as needed. For detailed steps, see Creating security context constraints.
Security context constraint | Description |
---|---|
anyuid | Provides all features of the restricted-v2 SCC, but allows users to run with any UID and any GID. |
hostaccess | Allows access to all host namespaces but still requires pods to be run with a UID and SELinux context that are allocated to the namespace. Warning: This SCC allows host access to namespaces, file systems, and PIDs. It should only be used by trusted pods. Grant with caution. |
hostmount-anyuid | Provides all the features of the restricted-v2 SCC, but allows host mounts and running as any UID and any GID on the system. Warning: This SCC allows host file system access as any UID, including UID 0. Grant with caution. |
hostnetwork | Allows using host networking and host ports but still requires pods to be run with a UID and SELinux context that are allocated to the namespace. Warning: If additional workloads are run on control plane hosts, use caution when providing access to hostnetwork. A workload that runs hostnetwork on a control plane host is effectively root on the cluster and must be trusted accordingly. |
hostnetwork-v2 | Like the hostnetwork SCC, but with the following differences: ALL capabilities are dropped from containers. The NET_BIND_SERVICE capability can be added explicitly. seccompProfile is set to runtime/default by default. allowPrivilegeEscalation must be unset or set to false in security contexts. |
node-exporter | Used for the Prometheus node exporter. Warning: This SCC allows host file system access as any UID, including UID 0. Grant with caution. |
nonroot | Provides all features of the restricted-v2 SCC, but allows users to run with any non-root UID. The user must specify the UID, or it must be specified in the manifest of the container runtime. |
nonroot-v2 | Like the nonroot SCC, but with the following differences: ALL capabilities are dropped from containers. The NET_BIND_SERVICE capability can be added explicitly. seccompProfile is set to runtime/default by default. allowPrivilegeEscalation must be unset or set to false in security contexts. |
privileged | Allows access to all privileged and host features and the ability to run as any user, any group, any FSGroup, and with any SELinux context. Warning: This is the most relaxed SCC and should be used only for cluster administration. Grant with caution. Note: Setting privileged: true in the pod specification does not necessarily select the privileged SCC for the pod; the SCC with allowPrivilegedContainer: true and the highest prioritization that the user or service account is permitted to use is selected. |
restricted | Denies access to all host features and requires pods to be run with a UID, and SELinux context that are allocated to the namespace. In clusters that were upgraded from OpenShift Container Platform 4.10 or earlier, this SCC is available for use by any authenticated user. The restricted SCC is not available to users of new OpenShift Container Platform 4.11 or later installations, unless access is explicitly granted. |
restricted-v2 | Like the restricted SCC, but with the following differences: ALL capabilities are dropped from containers. The NET_BIND_SERVICE capability can be added explicitly. seccompProfile is set to runtime/default by default. allowPrivilegeEscalation must be unset or set to false in security contexts. This is the most restrictive SCC provided by a new installation and is used by default for authenticated users. Note: The restricted-v2 SCC is the most restrictive of the default SCCs, but you can create a custom SCC that is even more restrictive. |
15.1.2. Security context constraints settings
Security context constraints (SCCs) are composed of settings and strategies that control the security features a pod has access to. These settings fall into three categories:
Category | Description |
---|---|
Controlled by a boolean | Fields of this type default to the most restrictive value. For example, AllowPrivilegedContainer is always set to false if unspecified. |
Controlled by an allowable set | Fields of this type are checked against the set to ensure their value is allowed. |
Controlled by a strategy | Items that have a strategy to generate a value provide a mechanism to generate the value and a mechanism to ensure that a specified value falls into the set of allowable values. |
CRI-O has the following default list of capabilities that are allowed for each container of a pod:
-
CHOWN
-
DAC_OVERRIDE
-
FSETID
-
FOWNER
-
SETGID
-
SETUID
-
SETPCAP
-
NET_BIND_SERVICE
-
KILL
The containers use the capabilities from this default list, but pod manifest authors can alter the list by requesting additional capabilities or removing some of the default behaviors. Use the allowedCapabilities
, defaultAddCapabilities
, and requiredDropCapabilities
parameters to control such requests from the pods. With these parameters you can specify which capabilities can be requested, which ones must be added to each container, and which ones must be forbidden, or dropped, from each container.
You can drop all capabilities from containers by setting the requiredDropCapabilities
parameter to ALL
. This is what the restricted-v2
SCC does.
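As an illustrative sketch of how these three parameters work together in an SCC (the values mirror the behavior described above for the restricted-v2 SCC, but the snippet is not a complete SCC definition):

# Pods may explicitly request NET_BIND_SERVICE,
# no capabilities are added automatically,
# and all capabilities must be dropped from every container.
allowedCapabilities:
- NET_BIND_SERVICE
defaultAddCapabilities: []
requiredDropCapabilities:
- ALL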
15.1.3. Security context constraints strategies
RunAsUser
MustRunAs
- Requires arunAsUser
to be configured. Uses the configuredrunAsUser
as the default. Validates against the configuredrunAsUser
.Example
MustRunAs
snippet... runAsUser: type: MustRunAs uid: <id> ...
MustRunAsRange
- Requires minimum and maximum values to be defined if not using pre-allocated values. Uses the minimum as the default. Validates against the entire allowable range.Example
MustRunAsRange
snippet... runAsUser: type: MustRunAsRange uidRangeMax: <maxvalue> uidRangeMin: <minvalue> ...
MustRunAsNonRoot
- Requires that the pod be submitted with a non-zerorunAsUser
or have theUSER
directive defined in the image. No default provided.Example
MustRunAsNonRoot
snippet... runAsUser: type: MustRunAsNonRoot ...
RunAsAny
- No default provided. Allows anyrunAsUser
to be specified.Example
RunAsAny
snippet... runAsUser: type: RunAsAny ...
SELinuxContext
-
MustRunAs
- RequiresseLinuxOptions
to be configured if not using pre-allocated values. UsesseLinuxOptions
as the default. Validates againstseLinuxOptions
. -
RunAsAny
- No default provided. Allows anyseLinuxOptions
to be specified.
SupplementalGroups
-
MustRunAs
- Requires at least one range to be specified if not using pre-allocated values. Uses the minimum value of the first range as the default. Validates against all ranges. -
RunAsAny
- No default provided. Allows anysupplementalGroups
to be specified.
FSGroup
-
MustRunAs
- Requires at least one range to be specified if not using pre-allocated values. Uses the minimum value of the first range as the default. Validates against the first ID in the first range. -
RunAsAny
- No default provided. Allows anyfsGroup
ID to be specified.
15.1.4. Controlling volumes
The usage of specific volume types can be controlled by setting the volumes
field of the SCC.
The allowable values of this field correspond to the volume sources that are defined when creating a volume:
-
awsElasticBlockStore
-
azureDisk
-
azureFile
-
cephFS
-
cinder
-
configMap
-
csi
-
downwardAPI
-
emptyDir
-
fc
-
flexVolume
-
flocker
-
gcePersistentDisk
-
ephemeral
-
gitRepo
-
glusterfs
-
hostPath
-
iscsi
-
nfs
-
persistentVolumeClaim
-
photonPersistentDisk
-
portworxVolume
-
projected
-
quobyte
-
rbd
-
scaleIO
-
secret
-
storageos
-
vsphereVolume
- * (A special value to allow the use of all volume types.)
-
none
(A special value to disallow the use of all volume types. Exists only for backwards compatibility.)
The recommended minimum set of allowed volumes for new SCCs are configMap
, downwardAPI
, emptyDir
, persistentVolumeClaim
, secret
, and projected
.
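For example, a custom SCC that permits only this minimal set could declare the field as follows (an illustrative snippet, not a complete SCC):

volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- projected
- secret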
This list of allowable volume types is not exhaustive because new types are added with each release of OpenShift Container Platform.
For backwards compatibility, the usage of allowHostDirVolumePlugin
overrides settings in the volumes
field. For example, if allowHostDirVolumePlugin
is set to false but allowed in the volumes
field, then the hostPath
value will be removed from volumes
.
15.1.5. Admission control
Admission control with SCCs allows for control over the creation of resources based on the capabilities granted to a user.
In terms of the SCCs, this means that an admission controller can inspect the user information made available in the context to retrieve an appropriate set of SCCs. Doing so ensures the pod is authorized to make requests about its operating environment or to generate a set of constraints to apply to the pod.
The set of SCCs that admission uses to authorize a pod are determined by the user identity and groups that the user belongs to. Additionally, if the pod specifies a service account, the set of allowable SCCs includes any constraints accessible to the service account.
When you create a workload resource, such as deployment, only the service account is used to find the SCCs and admit the pods when they are created.
Admission uses the following approach to create the final security context for the pod:
- Retrieve all SCCs available for use.
- Generate field values for security context settings that were not specified on the request.
- Validate the final settings against the available constraints.
If a matching set of constraints is found, then the pod is accepted. If the request cannot be matched to an SCC, the pod is rejected.
A pod must validate every field against the SCC. The following are examples for just two of the fields that must be validated:
These examples are in the context of a strategy using the pre-allocated values.
An FSGroup SCC strategy of MustRunAs
If the pod defines a fsGroup
ID, then that ID must equal the default fsGroup
ID. Otherwise, the pod is not validated by that SCC and the next SCC is evaluated.
If the SecurityContextConstraints.fsGroup
field has value RunAsAny
and the pod specification omits the Pod.spec.securityContext.fsGroup
, then this field is considered valid. Note that it is possible that during validation, other SCC settings will reject other pod fields and thus cause the pod to fail.
A SupplementalGroups
SCC strategy of MustRunAs
If the pod specification defines one or more supplementalGroups
IDs, then the pod’s IDs must equal one of the IDs in the namespace’s openshift.io/sa.scc.supplemental-groups
annotation. Otherwise, the pod is not validated by that SCC and the next SCC is evaluated.
If the SecurityContextConstraints.supplementalGroups
field has value RunAsAny
and the pod specification omits the Pod.spec.securityContext.supplementalGroups
, then this field is considered valid. Note that it is possible that during validation, other SCC settings will reject other pod fields and thus cause the pod to fail.
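As a hypothetical illustration, if the namespace annotation openshift.io/sa.scc.supplemental-groups is set to 1000100000/10000, a pod that requests a supplemental group inside that block passes this particular check; the image reference is a placeholder:

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  securityContext:
    supplementalGroups: [1000100007] # falls within the 1000100000/10000 block
  containers:
  - name: app
    image: registry.example.com/app:latest # placeholder image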
15.1.6. Security context constraints prioritization
Security context constraints (SCCs) have a priority field that affects the ordering when attempting to validate a request by the admission controller.
A priority value of 0
is the lowest possible priority. A nil priority is considered a 0
, or lowest, priority. Higher priority SCCs are moved to the front of the set when sorting.
When the complete set of available SCCs is determined, the SCCs are ordered in the following manner:
- The highest priority SCCs are ordered first.
- If the priorities are equal, the SCCs are sorted from most restrictive to least restrictive.
- If both the priorities and restrictions are equal, the SCCs are sorted by name.
By default, the anyuid
SCC granted to cluster administrators is given priority in their SCC set. This allows cluster administrators to run pods as any user by specifying RunAsUser
in the pod’s SecurityContext
.
15.2. About pre-allocated security context constraints values
The admission controller is aware of certain conditions in the security context constraints (SCCs) that trigger it to look up pre-allocated values from a namespace and populate the SCC before processing the pod. Each SCC strategy is evaluated independently of other strategies, with the pre-allocated values, where allowed, for each policy aggregated with pod specification values to make the final values for the various IDs defined in the running pod.
The following SCCs cause the admission controller to look for pre-allocated values when no ranges are defined in the pod specification:
-
A
RunAsUser
strategy ofMustRunAsRange
with no minimum or maximum set. Admission looks for theopenshift.io/sa.scc.uid-range
annotation to populate range fields. -
An
SELinuxContext
strategy ofMustRunAs
with no level set. Admission looks for theopenshift.io/sa.scc.mcs
annotation to populate the level. -
A
FSGroup
strategy ofMustRunAs
. Admission looks for theopenshift.io/sa.scc.supplemental-groups
annotation. -
A
SupplementalGroups
strategy ofMustRunAs
. Admission looks for theopenshift.io/sa.scc.supplemental-groups
annotation.
During the generation phase, the security context provider uses default values for any parameter values that are not specifically set in the pod. Default values are based on the selected strategy:
-
RunAsAny
andMustRunAsNonRoot
strategies do not provide default values. If the pod needs a parameter value, such as a group ID, you must define the value in the pod specification. -
MustRunAs
(single value) strategies provide a default value that is always used. For example, for group IDs, even if the pod specification defines its own ID value, the namespace’s default parameter value also appears in the pod’s groups. -
MustRunAsRange
andMustRunAs
(range-based) strategies provide the minimum value of the range. As with a single valueMustRunAs
strategy, the namespace’s default parameter value appears in the running pod. If a range-based strategy is configurable with multiple ranges, it provides the minimum value of the first configured range.
FSGroup
and SupplementalGroups
strategies fall back to the openshift.io/sa.scc.uid-range
annotation if the openshift.io/sa.scc.supplemental-groups
annotation does not exist on the namespace. If neither exists, the SCC is not created.
By default, the annotation-based FSGroup
strategy configures itself with a single range based on the minimum value for the annotation. For example, if your annotation reads 1/3
, the FSGroup
strategy configures itself with a minimum and maximum value of 1
. If you want to allow more groups to be accepted for the FSGroup
field, you can configure a custom SCC that does not use the annotation.
The openshift.io/sa.scc.supplemental-groups
annotation accepts a comma-delimited list of blocks in the format of <start>/<length>
or <start>-<end>
. The openshift.io/sa.scc.uid-range
annotation accepts only a single block.
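The following is a hypothetical example of how these annotations might look on a namespace; the range and MCS label values are illustrative only:

apiVersion: v1
kind: Namespace
metadata:
  name: example-namespace
  annotations:
    openshift.io/sa.scc.uid-range: 1000100000/10000
    openshift.io/sa.scc.supplemental-groups: 1000100000/10000
    openshift.io/sa.scc.mcs: s0:c26,c5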
15.3. Example security context constraints
The following examples show the security context constraints (SCC) format and annotations:
Annotated privileged
SCC
allowHostDirVolumePlugin: true
allowHostIPC: true
allowHostNetwork: true
allowHostPID: true
allowHostPorts: true
allowPrivilegedContainer: true
allowedCapabilities: 1
- '*'
apiVersion: security.openshift.io/v1
defaultAddCapabilities: [] 2
fsGroup: 3
  type: RunAsAny
groups: 4
- system:cluster-admins
- system:nodes
kind: SecurityContextConstraints
metadata:
  annotations:
    kubernetes.io/description: 'privileged allows access to all privileged and host features and the ability to run as any user, any group, any fsGroup, and with any SELinux context. WARNING: this is the most relaxed SCC and should be used only for cluster administration. Grant with caution.'
  creationTimestamp: null
  name: privileged
priority: null
readOnlyRootFilesystem: false
requiredDropCapabilities: 5
- KILL
- MKNOD
- SETUID
- SETGID
runAsUser: 6
  type: RunAsAny
seLinuxContext: 7
  type: RunAsAny
seccompProfiles:
- '*'
supplementalGroups: 8
  type: RunAsAny
users: 9
- system:serviceaccount:default:registry
- system:serviceaccount:default:router
- system:serviceaccount:openshift-infra:build-controller
volumes: 10
- '*'
- 1
- A list of capabilities that a pod can request. An empty list means that no capabilities can be requested, while the special symbol
*
allows any capabilities. - 2
- A list of additional capabilities that are added to any pod.
- 3
- The
FSGroup
strategy, which dictates the allowable values for the security context. - 4
- The groups that can access this SCC.
- 5
- A list of capabilities to drop from a pod. Or, specify
ALL
to drop all capabilities. - 6
- The
runAsUser
strategy type, which dictates the allowable values for the security context. - 7
- The
seLinuxContext
strategy type, which dictates the allowable values for the security context. - 8
- The
supplementalGroups
strategy, which dictates the allowable supplemental groups for the security context. - 9
- The users who can access this SCC.
- 10
- The allowable volume types for the security context. In the example,
*
allows the use of all volume types.
The users
and groups
fields on the SCC control which users can access the SCC. By default, cluster administrators, nodes, and the build controller are granted access to the privileged SCC. All authenticated users are granted access to the restricted-v2
SCC.
Without explicit runAsUser
setting
apiVersion: v1
kind: Pod
metadata:
name: security-context-demo
spec:
securityContext: 1
containers:
- name: sec-ctx-demo
image: gcr.io/google-samples/node-hello:1.0
- 1
- When a container or pod does not request a user ID under which it should be run, the effective UID depends on the SCC that emits this pod. Because the
restricted-v2
SCC is granted to all authenticated users by default, it will be available to all users and service accounts and used in most cases. Therestricted-v2
SCC usesMustRunAsRange
strategy for constraining and defaulting the possible values of thesecurityContext.runAsUser
field. The admission plugin will look for theopenshift.io/sa.scc.uid-range
annotation on the current project to populate range fields, as it does not provide this range. In the end, a container will haverunAsUser
equal to the first value of the range that is hard to predict because every project has different ranges.
With explicit runAsUser
setting
apiVersion: v1
kind: Pod
metadata:
name: security-context-demo
spec:
securityContext:
runAsUser: 1000 1
containers:
- name: sec-ctx-demo
image: gcr.io/google-samples/node-hello:1.0
- 1
- A container or pod that requests a specific user ID will be accepted by OpenShift Container Platform only when a service account or a user is granted access to a SCC that allows such a user ID. The SCC can allow arbitrary IDs, an ID that falls into a range, or the exact user ID specific to the request.
This configuration is valid for SELinux, fsGroup, and Supplemental Groups.
15.4. Creating security context constraints
If the default security context constraints (SCCs) do not satisfy your application workload requirements, you can create a custom SCC by using the OpenShift CLI (oc
).
Creating and modifying your own SCCs are advanced operations that might cause instability to your cluster. If you have questions about using your own SCCs, contact Red Hat Support. For information about contacting Red Hat support, see Getting support.
Prerequisites
-
Install the OpenShift CLI (
oc
). -
Log in to the cluster as a user with the
cluster-admin
role.
Procedure
Define the SCC in a YAML file named
scc-admin.yaml
:

kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: scc-admin
allowPrivilegedContainer: true
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
fsGroup:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
users:
- my-admin-user
groups:
- my-admin-group
Optionally, you can drop specific capabilities for an SCC by setting the
requiredDropCapabilities
field with the desired values. Any specified capabilities are dropped from the container. To drop all capabilities, specifyALL
. For example, to create an SCC that drops theKILL
,MKNOD
, andSYS_CHROOT
capabilities, add the following to the SCC object:

requiredDropCapabilities:
- KILL
- MKNOD
- SYS_CHROOT
Note: You cannot list a capability in both allowedCapabilities and requiredDropCapabilities. CRI-O supports the same list of capability values that are found in the Docker documentation.
Create the SCC by passing in the file:
$ oc create -f scc-admin.yaml
Example output
securitycontextconstraints "scc-admin" created
Verification
Verify that the SCC was created:
$ oc get scc scc-admin
Example output
NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES scc-admin true [] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [awsElasticBlockStore azureDisk azureFile cephFS cinder configMap downwardAPI emptyDir fc flexVolume flocker gcePersistentDisk gitRepo glusterfs iscsi nfs persistentVolumeClaim photonPersistentDisk quobyte rbd secret vsphere]
15.5. Configuring a workload to require a specific SCC
You can configure a workload to require a certain security context constraint (SCC). This is useful in scenarios where you want to pin a specific SCC to the workload or if you want to prevent your required SCC from being preempted by another SCC in the cluster.
To require a specific SCC, set the openshift.io/required-scc
annotation on your workload. You can set this annotation on any resource that can set a pod manifest template, such as a deployment or daemon set.
The SCC must exist in the cluster and must be applicable to the workload, otherwise pod admission fails. An SCC is considered applicable to the workload if the user creating the pod or the pod’s service account has use
permissions for the SCC in the pod’s namespace.
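For example, one way to give a workload's service account use permission on an SCC is the oc adm policy command shown below; the SCC, service account, and namespace names are placeholders, and role-based access, described later in this chapter, is an alternative approach:

$ oc adm policy add-scc-to-user my-scc -z default -n my-namespace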
Do not change the openshift.io/required-scc
annotation in the live pod’s manifest, because doing so causes the pod admission to fail. To change the required SCC, update the annotation in the underlying pod template, which causes the pod to be deleted and re-created.
Prerequisites
- The SCC must exist in the cluster.
Procedure
Create a YAML file for the deployment and specify a required SCC by setting the
openshift.io/required-scc
annotation:Example
deployment.yaml
apiVersion: apps/v1
kind: Deployment
spec:
# ...
  template:
    metadata:
      annotations:
        openshift.io/required-scc: "my-scc" 1
# ...
- 1
- Specify the name of the SCC to require.
Create the resource by running the following command:
$ oc create -f deployment.yaml
Verification
Verify that the deployment used the specified SCC:
View the value of the pod’s
openshift.io/scc
annotation by running the following command:$ oc get pod <pod_name> -o jsonpath='{.metadata.annotations.openshift\.io\/scc}{"\n"}' 1
- 1
- Replace
<pod_name>
with the name of your deployment pod.
Examine the output and confirm that the displayed SCC matches the SCC that you defined in the deployment:
Example output
my-scc
15.6. Role-based access to security context constraints
You can specify SCCs as resources that are handled by RBAC. This allows you to scope access to your SCCs to a certain project or to the entire cluster. Assigning users, groups, or service accounts directly to an SCC retains cluster-wide scope.
Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components.
The following default projects are considered highly privileged: default
, kube-public
, kube-system
, openshift
, openshift-infra
, openshift-node
, and other system-created projects that have the openshift.io/run-level
label set to 0
or 1
. Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects.
To include access to SCCs for your role, specify the scc
resource when creating a role.
$ oc create role <role-name> --verb=use --resource=scc --resource-name=<scc-name> -n <namespace>
This results in the following role definition:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  ...
  name: role-name 1
  namespace: namespace 2
  ...
rules:
- apiGroups:
  - security.openshift.io 3
  resourceNames:
  - scc-name 4
  resources:
  - securitycontextconstraints 5
  verbs: 6
  - use
- 1
- The role’s name.
- 2
- Namespace of the defined role. Defaults to
default
if not specified. - 3
- The API group that includes the
SecurityContextConstraints
resource. Automatically defined whenscc
is specified as a resource. - 4
- An example name for an SCC you want to have access.
- 5
- Name of the resource group that allows users to specify SCC names in the
resourceNames
field. - 6
- A list of verbs to apply to the role.
A local or cluster role with such a rule allows the subjects that are bound to it with a role binding or a cluster role binding to use the user-defined SCC called scc-name
.
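For example, to bind such a role to a service account so that pods using that service account can be admitted with the SCC, you might create a role binding similar to the following; all names are placeholders:

$ oc create rolebinding <binding_name> --role=<role-name> --serviceaccount=<namespace>:<service_account_name> -n <namespace>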
Because RBAC is designed to prevent escalation, even project administrators are unable to grant access to an SCC. By default, they are not allowed to use the verb use
on SCC resources, including the restricted-v2
SCC.
15.7. Reference of security context constraints commands
You can manage security context constraints (SCCs) in your instance as normal API objects by using the OpenShift CLI (oc
).
You must have cluster-admin
privileges to manage SCCs.
15.7.1. Listing security context constraints
To get a current list of SCCs:
$ oc get scc
Example output
NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES anyuid false <no value> MustRunAs RunAsAny RunAsAny RunAsAny 10 false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] hostaccess false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","hostPath","persistentVolumeClaim","projected","secret"] hostmount-anyuid false <no value> MustRunAs RunAsAny RunAsAny RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","hostPath","nfs","persistentVolumeClaim","projected","secret"] hostnetwork false <no value> MustRunAs MustRunAsRange MustRunAs MustRunAs <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] hostnetwork-v2 false ["NET_BIND_SERVICE"] MustRunAs MustRunAsRange MustRunAs MustRunAs <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] node-exporter true <no value> RunAsAny RunAsAny RunAsAny RunAsAny <no value> false ["*"] nonroot false <no value> MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] nonroot-v2 false ["NET_BIND_SERVICE"] MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] privileged true ["*"] RunAsAny RunAsAny RunAsAny RunAsAny <no value> false ["*"] restricted false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] restricted-v2 false ["NET_BIND_SERVICE"] MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"]
15.7.2. Examining security context constraints
You can view information about a particular SCC, including which users, service accounts, and groups the SCC is applied to.
For example, to examine the restricted
SCC:
$ oc describe scc restricted
Example output
Name: restricted Priority: <none> Access: Users: <none> 1 Groups: <none> 2 Settings: Allow Privileged: false Allow Privilege Escalation: true Default Add Capabilities: <none> Required Drop Capabilities: KILL,MKNOD,SETUID,SETGID Allowed Capabilities: <none> Allowed Seccomp Profiles: <none> Allowed Volume Types: configMap,downwardAPI,emptyDir,persistentVolumeClaim,projected,secret Allowed Flexvolumes: <all> Allowed Unsafe Sysctls: <none> Forbidden Sysctls: <none> Allow Host Network: false Allow Host Ports: false Allow Host PID: false Allow Host IPC: false Read Only Root Filesystem: false Run As User Strategy: MustRunAsRange UID: <none> UID Range Min: <none> UID Range Max: <none> SELinux Context Strategy: MustRunAs User: <none> Role: <none> Type: <none> Level: <none> FSGroup Strategy: MustRunAs Ranges: <none> Supplemental Groups Strategy: RunAsAny Ranges: <none>
To preserve customized SCCs during upgrades, do not edit settings on the default SCCs.
15.7.3. Updating security context constraints
If your custom SCC no longer satisfies your application workloads requirements, you can update your SCC by using the OpenShift CLI (oc
).
To update an existing SCC:
$ oc edit scc <scc_name>
To preserve customized SCCs during upgrades, do not edit settings on the default SCCs.
15.7.4. Deleting security context constraints
If you no longer require your custom SCC, you can delete the SCC by using the OpenShift CLI (oc
).
To delete an SCC:
$ oc delete scc <scc_name>
Do not delete default SCCs. If you delete a default SCC, it is regenerated by the Cluster Version Operator.
15.8. Additional resources
Chapter 16. Understanding and managing pod security admission
Pod security admission is an implementation of the Kubernetes pod security standards. Use pod security admission to restrict the behavior of pods.
16.1. About pod security admission
OpenShift Container Platform includes Kubernetes pod security admission. Pods that do not comply with the pod security admission defined globally or at the namespace level are not admitted to the cluster and cannot run.
Globally, the privileged
profile is enforced, and the restricted
profile is used for warnings and audits.
You can also configure the pod security admission settings at the namespace level.
Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components.
The following default projects are considered highly privileged: default
, kube-public
, kube-system
, openshift
, openshift-infra
, openshift-node
, and other system-created projects that have the openshift.io/run-level
label set to 0
or 1
. Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects.
16.1.1. Pod security admission modes
You can configure the following pod security admission modes for a namespace:
Mode | Label | Description |
---|---|---|
enforce | pod-security.kubernetes.io/enforce | Rejects a pod from admission if it does not comply with the set profile |
audit | pod-security.kubernetes.io/audit | Logs audit events if a pod does not comply with the set profile |
warn | pod-security.kubernetes.io/warn | Displays warnings if a pod does not comply with the set profile |
16.1.2. Pod security admission profiles
You can set each of the pod security admission modes to one of the following profiles:
Profile | Description |
---|---|
privileged | Least restrictive policy; allows for known privilege escalation |
baseline | Minimally restrictive policy; prevents known privilege escalations |
restricted | Most restrictive policy; follows current pod hardening best practices |
16.1.3. Privileged namespaces
The following system namespaces are always set to the privileged
pod security admission profile:
-
default
-
kube-public
-
kube-system
You cannot change the pod security profile for these privileged namespaces.
16.1.4. Pod security admission and security context constraints
Pod security admission standards and security context constraints are reconciled and enforced by two independent controllers. The two controllers work independently using the following processes to enforce security policies:
-
The security context constraint controller may mutate some security context fields per the pod’s assigned SCC. For example, if the seccomp profile is empty or not set and if the pod’s assigned SCC enforces
seccompProfiles
field to beruntime/default
, the controller sets the default type toRuntimeDefault
. - The security context constraint controller validates the pod’s security context against the matching SCC.
- The pod security admission controller validates the pod’s security context against the pod security standard assigned to the namespace.
16.2. About pod security admission synchronization
In addition to the global pod security admission control configuration, a controller applies pod security admission control warn
and audit
labels to namespaces according to the SCC permissions of the service accounts that are in a given namespace.
The controller examines ServiceAccount
object permissions to use security context constraints in each namespace. Security context constraints (SCCs) are mapped to pod security profiles based on their field values; the controller uses these translated profiles. Pod security admission warn
and audit
labels are set to the most privileged pod security profile in the namespace to prevent displaying warnings and logging audit events when pods are created.
Namespace labeling is based on consideration of namespace-local service account privileges.
Applying pods directly might use the SCC privileges of the user who runs the pod. However, user privileges are not considered during automatic labeling.
16.2.1. Pod security admission synchronization namespace exclusions
Pod security admission synchronization is permanently disabled on most system-created namespaces. Synchronization is also initially disabled on user-created openshift-*
prefixed namespaces, but you can enable synchronization on them later.
If a pod security admission label (pod-security.kubernetes.io/<mode>
) is manually modified from the automatically labeled value on a label-synchronized namespace, synchronization is disabled for that label.
If necessary, you can enable synchronization again by using one of the following methods:
- By removing the modified pod security admission label from the namespace
By setting the
security.openshift.io/scc.podSecurityLabelSync
label totrue
If you force synchronization by adding this label, then any modified pod security admission labels will be overwritten.
Permanently disabled namespaces
Namespaces that are defined as part of the cluster payload have pod security admission synchronization disabled permanently. The following namespaces are permanently disabled:
-
default
-
kube-node-lease
-
kube-system
-
kube-public
-
openshift
-
All system-created namespaces that are prefixed with
openshift-
, except foropenshift-operators
Initially disabled namespaces
By default, all namespaces that have an openshift-
prefix have pod security admission synchronization disabled initially. You can enable synchronization for user-created openshift-*
namespaces and for the openshift-operators
namespace.
You cannot enable synchronization for any system-created openshift-*
namespaces, except for openshift-operators
.
If an Operator is installed in a user-created openshift-*
namespace, synchronization is enabled automatically after a cluster service version (CSV) is created in the namespace. The synchronized label is derived from the permissions of the service accounts in the namespace.
16.3. Controlling pod security admission synchronization
You can enable or disable automatic pod security admission synchronization for most namespaces.
You cannot enable pod security admission synchronization on some system-created namespaces. For more information, see Pod security admission synchronization namespace exclusions.
Procedure
For each namespace that you want to configure, set a value for the
security.openshift.io/scc.podSecurityLabelSync
label:To disable pod security admission label synchronization in a namespace, set the value of the
security.openshift.io/scc.podSecurityLabelSync
label tofalse
.Run the following command:
$ oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=false
To enable pod security admission label synchronization in a namespace, set the value of the
security.openshift.io/scc.podSecurityLabelSync
label totrue
.Run the following command:
$ oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=true
Note: Use the
--overwrite
flag to overwrite the value if this label is already set on the namespace.
Additional resources
16.4. Configuring pod security admission for a namespace
You can configure the pod security admission settings at the namespace level. For each of the pod security admission modes on the namespace, you can set which pod security admission profile to use.
Procedure
For each pod security admission mode that you want to set on a namespace, run the following command:
$ oc label namespace <namespace> \ 1
    pod-security.kubernetes.io/<mode>=<profile> \ 2
    --overwrite
- 1
- Set <namespace> to the namespace to configure.
- 2
- Set <mode> to enforce, warn, or audit. Set <profile> to restricted, baseline, or privileged.
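For example, to warn about and audit workloads that do not meet the restricted profile in a namespace without changing enforcement, you might run a command similar to the following; the namespace name is a placeholder:

$ oc label namespace my-app \
    pod-security.kubernetes.io/warn=restricted \
    pod-security.kubernetes.io/audit=restricted \
    --overwrite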
16.5. About pod security admission alerts
A PodSecurityViolation
alert is triggered when the Kubernetes API server reports that there is a pod denial on the audit level of the pod security admission controller. This alert persists for one day.
View the Kubernetes API server audit logs to investigate alerts that were triggered. As an example, a workload is likely to fail admission if global enforcement is set to the restricted
pod security level.
For assistance in identifying pod security admission violation audit events, see Audit annotations in the Kubernetes documentation.
16.5.1. Identifying pod security violations
The PodSecurityViolation
alert does not provide details on which workloads are causing pod security violations. You can identify the affected workloads by reviewing the Kubernetes API server audit logs. This procedure uses the must-gather
tool to gather the audit logs and then searches for the pod-security.kubernetes.io/audit-violations
annotation.
Prerequisites
-
You have installed
jq
. -
You have access to the cluster as a user with the
cluster-admin
role.
Procedure
To gather the audit logs, enter the following command:
$ oc adm must-gather -- /usr/bin/gather_audit_logs
To output the affected workload details, enter the following command:
$ zgrep -h pod-security.kubernetes.io/audit-violations must-gather.local.<archive_id>/<image_digest_id>/audit_logs/kube-apiserver/*log.gz \
  | jq -r 'select((.annotations["pod-security.kubernetes.io/audit-violations"] != null) and (.objectRef.resource=="pods")) | .objectRef.namespace + " " + .objectRef.name' \
  | sort | uniq -c
Replace
<archive_id>
and<image_digest_id>
with the actual path names.Example output
1 test-namespace my-pod
16.6. Additional resources
Chapter 17. Impersonating the system:admin user
17.1. API impersonation
You can configure a request to the OpenShift Container Platform API to act as though it originated from another user. For more information, see User impersonation in the Kubernetes documentation.
17.2. Impersonating the system:admin user
You can grant a user permission to impersonate system:admin
, which grants them cluster administrator permissions.
Procedure
To grant a user permission to impersonate
system:admin
, run the following command:$ oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --user=<username>
Tip: You can alternatively apply the following YAML to grant permission to impersonate system:admin:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: <any_valid_name>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: sudoer
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: <username>
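After the cluster role binding exists, the user can impersonate system:admin on individual requests by passing the --as flag, for example:

$ oc get nodes --as=system:admin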
17.3. Impersonating the system:admin group
When a system:admin
user is granted cluster administration permissions through a group, you must include the --as=<user> --as-group=<group1> --as-group=<group2>
parameters in the command to impersonate the associated groups.
Procedure
To grant a user permission to impersonate a
system:admin
by impersonating the associated cluster administration groups, run the following command:$ oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --as=<user> \ --as-group=<group1> --as-group=<group2>
17.4. Adding unauthenticated groups to cluster roles
As a cluster administrator, you can add unauthenticated users to the following cluster roles in OpenShift Container Platform by creating a cluster role binding. Unauthenticated users do not have access to non-public cluster roles. This should only be done in specific use cases when necessary.
You can add unauthenticated users to the following cluster roles:
-
system:scope-impersonation
-
system:webhook
-
system:oauth-token-deleter
-
self-access-reviewer
Always verify compliance with your organization’s security standards when modifying unauthenticated access.
Prerequisites
-
You have access to the cluster as a user with the
cluster-admin
role. -
You have installed the OpenShift CLI (
oc
).
Procedure
Create a YAML file named
add-<cluster_role>-unauth.yaml
and add the following content:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  name: <cluster_role>access-unauthenticated
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: <cluster_role>
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:unauthenticated
Apply the configuration by running the following command:
$ oc apply -f add-<cluster_role>-unauth.yaml
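Optionally, you can confirm that the binding exists with a command similar to the following:

$ oc get clusterrolebinding <cluster_role>access-unauthenticated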
Chapter 18. Syncing LDAP groups
As an administrator, you can use groups to manage users, change their permissions, and enhance collaboration. Your organization may have already created user groups and stored them in an LDAP server. OpenShift Container Platform can sync those LDAP records with internal OpenShift Container Platform records, enabling you to manage your groups in one place. OpenShift Container Platform currently supports group sync with LDAP servers using three common schemas for defining group membership: RFC 2307, Active Directory, and augmented Active Directory.
For more information on configuring LDAP, see Configuring an LDAP identity provider.
You must have cluster-admin
privileges to sync groups.
18.1. About configuring LDAP sync
Before you can run LDAP sync, you need a sync configuration file. This file contains the following LDAP client configuration details:
- Configuration for connecting to your LDAP server.
- Sync configuration options that are dependent on the schema used in your LDAP server.
- An administrator-defined list of name mappings that maps OpenShift Container Platform group names to groups in your LDAP server.
The format of the configuration file depends upon the schema you are using: RFC 2307, Active Directory, or augmented Active Directory.
- LDAP client configuration
- The LDAP client configuration section of the configuration defines the connections to your LDAP server.
LDAP client configuration
url: ldap://10.0.0.0:389 1
bindDN: cn=admin,dc=example,dc=com 2
bindPassword: <password> 3
insecure: false 4
ca: my-ldap-ca-bundle.crt 5
- 1
- The connection protocol, IP address of the LDAP server hosting your database, and the port to connect to, formatted as
scheme://host:port
. - 2
- Optional distinguished name (DN) to use as the Bind DN. OpenShift Container Platform uses this if elevated privilege is required to retrieve entries for the sync operation.
- 3
- Optional password to use to bind. OpenShift Container Platform uses this if elevated privilege is necessary to retrieve entries for the sync operation. This value may also be provided in an environment variable, external file, or encrypted file.
- 4
- When
false
, secure LDAP (ldaps://
) URLs connect using TLS, and insecure LDAP (ldap://
) URLs are upgraded to TLS. Whentrue
, no TLS connection is made to the server and you cannot useldaps://
URL schemes. - 5
- The certificate bundle to use for validating server certificates for the configured URL. If empty, OpenShift Container Platform uses system-trusted roots. This only applies if
insecure
is set tofalse
.
- LDAP query definition
- Sync configurations consist of LDAP query definitions for the entries that are required for synchronization. The specific definition of an LDAP query depends on the schema used to store membership information in the LDAP server.
LDAP query definition
baseDN: ou=users,dc=example,dc=com 1
scope: sub 2
derefAliases: never 3
timeout: 0 4
filter: (objectClass=person) 5
pageSize: 0 6
- 1
- The distinguished name (DN) of the branch of the directory where all searches will start from. It is required that you specify the top of your directory tree, but you can also specify a subtree in the directory.