Chapter 7. Securing data
To prevent unauthorized access to data, you can implement the following measures:
- Configure integration with Red Hat Single Sign-On in OpenShift to enable OpenID-Connect authentication and OAuth2 authorization.
- Apply role-based access controls to your virtual database.
- Configure 3Scale to secure OData API endpoints.
- Encrypt communications between database clients (ODBC and JDBC) and the virtual database.
7.1. Securing OData APIs for a virtual database
You can integrate data virtualization with Red Hat Single Sign-On and Red Hat 3scale API Management to apply advanced authorization and authentication controls to the OData endpoints for your virtual database services.
The Red Hat Single Sign-On technology uses OpenID-Connect as the authentication mechanism to secure the API, and uses OAuth2 as the authorization mechanism. You can integrate data virtualization with Red Hat Single Sign-On alone, or along with 3scale.
By default, after you create a virtual database, the OData interface to it is discoverable by 3scale, as long as the 3scale system is defined in the same cluster and namespace. By securing access to OData APIs through Red Hat Single Sign-On, you can define user roles and implement role-based access to the API endpoints. After you complete the configuration, you can control access in the virtual database at the level of the view, column, or data source. Only authorized users can access the API endpoint, and each user is permitted a level of access that is appropriate to their role (role-based access). By using 3scale as a gateway to your API, you can take advantage of 3scale’s API management features, allowing you to tie API usage to authorized accounts for tracking and billing.
When a user logs in, 3scale negotiates authentication with the Red Hat Single Sign-On package. If the authentication succeeds, 3scale passes a security token to the OData API for verification. The OData API then reads permissions from the token and applies them to the data roles that are defined for the virtual database.
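The security token that 3scale forwards is a standard JSON Web Token (JWT), and the role claims that the OData layer acts on live in its payload. The following Python sketch is only an illustration of the token format, not product code; the demo token and the realm_access claim layout follow Keycloak conventions:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the payload segment of a JWT without verifying it.

    A real service must verify the signature first; this sketch only
    shows where the role claims live inside the token.
    """
    payload_b64 = token.split(".")[1]
    # Restore the base64url padding that JWT encoding strips off.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def realm_roles(token: str) -> list:
    """Return the Keycloak-style realm roles carried in the token."""
    return decode_jwt_payload(token).get("realm_access", {}).get("roles", [])

# Build a throwaway demo token (header.payload.signature) for illustration.
_claims = {"preferred_username": "user", "realm_access": {"roles": ["ReadRole"]}}
_payload = base64.urlsafe_b64encode(json.dumps(_claims).encode()).decode().rstrip("=")
demo_token = "eyJhbGciOiJSUzI1NiJ9." + _payload + ".signature"
```

The roles extracted this way are what the virtual database matches against the data roles defined in its DDL.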
Prerequisites
- Red Hat Single Sign-On is running in the OpenShift cluster. For more information about deploying Red Hat Single Sign-On, see the Red Hat Single Sign-On for OpenShift documentation.
- You have Red Hat 3scale API Management installed in the OpenShift cluster that hosts your virtual database.
- You have configured integration between 3scale and Red Hat Single Sign-On. For more information, see Configuring Red Hat Single Sign-On integration in Using the Developer Portal.
- You have assigned the realm-management and manage-clients roles.
- You created API users and specified credentials.
- You configured 3scale to use OpenID-Connect as the authentication mechanism and OAuth2 as the authorization mechanism.
7.1.1. Configuring Red Hat Single Sign-On to secure OData
You must add configuration settings in Red Hat Single Sign-On to enable integration with data virtualization.
Prerequisites
- Red Hat Single Sign-On is running in the OpenShift cluster. For information about deploying Red Hat Single Sign-On, see the Red Hat Single Sign-On for OpenShift documentation.
- You run the Data Virtualization Operator to create a virtual database in the cluster where Red Hat Single Sign-On is running.
Procedure
- From a browser, log in to the Red Hat Single Sign-On Admin Console.
Create a realm for your data virtualization service.
- From the menu for the master realm, hover over Master and then click Add realm.
- Type a name for the realm, such as datavirt, and then click Create.
Add roles.
- From the menu, click Roles.
- Click Add Role.
- Type a name for the role, for example ReadRole, and then click Save.
- Create other roles as needed to map to the roles in your organization’s LDAP or Active Directory. For information about federating user data from external identity providers, see the Server Administration Guide.
Add users.
- From the menu, click Users, and then click Add user.
- On the Add user form, type a user name, for example, user, specify other user properties that you want to assign, and then click Save. Only the user field is mandatory.
- From the details page for the user, click the Credentials tab.
- Type and confirm a password for the user, click Reset Password, and then click Change password when prompted.
Assign roles to the user.
- Click the Role Mappings tab.
- In the Available Roles field, click ReadRole and then click Add selected.
- Create a second user called developer, and assign a password and roles to the user.
Create a data virtualization client entry.
The client entry represents the data virtualization service as an SSO client application.
- From the menu, click Clients.
- Click Create to open the Add Client page.
- In the Client ID field, type a name for the client, for example, dv-client.
- In the Client Protocol field, choose openid-connect.
- Leave the Root URL field blank, and click Save.
You are now ready to add SSO properties to the CR for the data virtualization service.
7.1.2. Adding SSO properties to the custom resource file
After you configure Red Hat Single Sign-On to secure the OData endpoints for a virtual database, you must configure the virtual database to integrate with Red Hat Single Sign-On. To configure the virtual database to use SSO, you add SSO properties to the CR that you used when you first deployed the service (for example, dv-customer.yaml). You add the properties as environment variables. The SSO configuration takes effect after you redeploy the virtual database.
In this procedure you add the following Red Hat Single Sign-On properties to the CR:
- Realm (KEYCLOAK_REALM) - The name of the realm that you created in Red Hat Single Sign-On for your virtual database.
- Authentication server URL (KEYCLOAK_AUTH_SERVER_URL) - The base URL of the Red Hat Single Sign-On server. It is usually of the form https://host:port/auth.
- Resource name (KEYCLOAK_RESOURCE) - The name of the client that you created in Red Hat Single Sign-On for the data virtualization service.
- SSL requirement (KEYCLOAK_SSL_REQUIRED) - Specifies whether requests to the realm require SSL/TLS. You can require SSL/TLS for all requests, external requests only, or none.
- Access type (KEYCLOAK_PUBLIC_CLIENT) - The OAuth application type for the client. Public access type is for client-side clients that sign in from a browser.
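A quick way to sanity-check the realm and server URL values before you add them to the CR is to build the standard OpenID Connect discovery URL that every Keycloak realm exposes. A minimal sketch, using placeholder values:

```python
def discovery_url(auth_server_url: str, realm: str) -> str:
    """Build the OpenID Connect discovery endpoint for a Keycloak realm."""
    return (auth_server_url.rstrip("/")
            + "/realms/" + realm
            + "/.well-known/openid-configuration")

# Placeholder values; substitute your own KEYCLOAK_AUTH_SERVER_URL and realm.
url = discovery_url("http://rh-sso-datavirt.openshift.example.com/auth", "datavirt")
```

Opening the resulting URL in a browser should return a JSON document describing the realm; if it does not, the realm name or server URL in your CR is wrong.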
Prerequisites
- You ran the Data Virtualization Operator to create a virtual database.
- Red Hat Single Sign-On is running in the cluster where the virtual database is deployed.
- You have the CR YAML file, for example, dv-customer.yaml, that you used to deploy the virtual database.
- You have administrator access to the Red Hat Single Sign-On Admin Console.
Procedure
- Log in to the Red Hat Single Sign-On Admin Console to find the values for the required authentication properties.
- In a text editor, open the CR YAML file that you used to deploy your virtual database, and define authentication environment variables that are based on the values of your Red Hat Single Sign-On properties. For example:
```yaml
env:
  - name: KEYCLOAK_REALM
    value: master
  - name: KEYCLOAK_AUTH_SERVER_URL
    value: http://rh-sso-datavirt.openshift.example.com/auth
  - name: KEYCLOAK_RESOURCE
    value: datavirt
  - name: KEYCLOAK_SSL_REQUIRED
    value: external
  - name: KEYCLOAK_PUBLIC_CLIENT
    value: "true"
```
- Declare a build source dependency for the following Maven artifact for securing data virtualizations: org.teiid:spring-keycloak
For example:
```yaml
env:
  ....
build:
  source:
    dependencies:
      - org.teiid:spring-keycloak
```
- Save the CR.
You are now ready to define data roles in the DDL for the virtual database.
7.1.3. Defining data roles in the virtual database DDL
After you configure Red Hat Single Sign-On to integrate with data virtualization, to complete the required configuration changes, define role-based access policies in the DDL for the virtual database. Depending on how you deployed the virtual database, the DDL might be embedded in the CR file, or exist as a separate file.
You add the following information to the DDL file:
- The name of the role. Roles that you define in the DDL must map to roles that you created earlier in Red Hat Single Sign-On.
Tip: For the sake of clarity, match the role names in the DDL file to the role names that you specified in Red Hat Single Sign-On. Consistent naming makes it easier to correlate how the roles that you define in each location relate to each other.
- The database access to allow to users who are granted the specified role. For example, SELECT permissions on a particular table view.
Prerequisites
- You configured Red Hat Single Sign-On to work with data virtualization as described in Section 7.1.1, “Configuring Red Hat Single Sign-On to secure OData”.
- You added SSO properties to the CR file for the virtual database, as described in Section 7.1.2, “Adding SSO properties to the custom resource file”.
Procedure
- In a text editor, open the file that contains the DDL description that you used to deploy the virtual database.
- Insert statements to add any roles that you defined for virtual database users in Red Hat Single Sign-On. For example, to add a role with the name ReadRole, add the following statement to the DDL:

```sql
CREATE ROLE ReadRole WITH FOREIGN ROLE ReadRole;
```

Add separate CREATE ROLE statements for each role that you want to implement for the virtual database.
- Insert statements that specify the level of access that users with the role have to database objects. For example:

```sql
GRANT SELECT ON TABLE "portfolio.CustomerZip" TO ReadRole;
```

Add separate GRANT statements for each role that you want to implement for the virtual database.
- Save and close the CR or DDL file.
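If a virtual database needs several roles, it can be convenient to generate the paired CREATE ROLE and GRANT statements from a single role-to-tables mapping. The following Python sketch is a hypothetical helper, not part of the product; the role and table names are examples:

```python
def role_ddl(roles: dict) -> str:
    """Emit CREATE ROLE and GRANT statements for a {role: [tables]} mapping."""
    statements = []
    for role, tables in roles.items():
        # One CREATE ROLE per role, mapped to the matching foreign (SSO) role.
        statements.append(f"CREATE ROLE {role} WITH FOREIGN ROLE {role};")
        for table in tables:
            # One SELECT grant per table the role may read.
            statements.append(f'GRANT SELECT ON TABLE "{table}" TO {role};')
    return "\n".join(statements)

ddl = role_ddl({"ReadRole": ["portfolio.CustomerZip"]})
```

The generated statements can then be pasted into the DDL section of the CR before you redeploy.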
You are now ready to redeploy the virtual database. For information about how to run the Data Virtualization Operator to deploy the virtual database, see Chapter 6, Running the data virtualization operator to deploy a virtual database.
After you redeploy the virtual database, add a redirect URL in the Red Hat Single Sign-On Admin Console. For more information, see Section 7.1.4, “Adding a redirect URI for the data virtualization client in the Red Hat Single Sign-On Admin Console”.
7.1.4. Adding a redirect URI for the data virtualization client in the Red Hat Single Sign-On Admin Console
After you enable SSO for your virtual database and redeploy it, specify a redirect URI for the data virtualization client that you created in Section 7.1.1, “Configuring Red Hat Single Sign-On to secure OData”.
Redirect URIs, or callback URLs, are required for public clients, such as OData clients that use OpenID Connect to authenticate and communicate with an identity provider through the redirect mechanism.
For more information about adding redirect URIs for OIDC clients, see the Red Hat Single Sign-On Server Administration Guide.
Prerequisites
- You enabled SSO for a virtual database and used the Data Virtualization Operator to redeploy it.
- You have administrator access to the Red Hat Single Sign-On Admin Console.
Procedure
- From a browser, sign in to the Red Hat Single Sign-On Admin Console.
- From the security realm where you created the client for the data virtualization service, click Clients in the menu, and then click the ID of the data virtualization client that you created previously (for example, dv-client).
- In the Valid Redirect URIs field, type the root URL for the OData service and append an asterisk to it. For example, http://datavirt.odata.example.com/*
- Test whether Red Hat Single Sign-On intercepts calls to the OData API. From a browser, type the address of an OData endpoint, for example, http://datavirt.odata.example.com/odata/CustomerZip. A login page prompts you to provide credentials.
- Sign in with the credentials of an authorized user. Your view of the data depends on the role of the account that you use to sign in.
Some endpoints, such as odata/$metadata, are excluded from security filtering so that they can be discovered by other services.
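The trailing asterisk in the Valid Redirect URIs entry acts as a prefix wildcard: any callback URL under the OData root is accepted. In simplified form (this sketch omits the additional normalization that Red Hat Single Sign-On performs), the check looks like this:

```python
def redirect_allowed(uri: str, valid_patterns: list) -> bool:
    """Simplified Valid Redirect URIs check: a trailing '*' matches any suffix."""
    for pattern in valid_patterns:
        if pattern.endswith("*"):
            # Wildcard entry: match any URI that shares the prefix.
            if uri.startswith(pattern[:-1]):
                return True
        elif uri == pattern:
            # Exact entry: match only the identical URI.
            return True
    return False

patterns = ["http://datavirt.odata.example.com/*"]
```

Listing the narrowest prefix that covers your OData endpoints keeps unrelated hosts from being accepted as redirect targets.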
7.2. Custom certificates for endpoint traffic encryption
Data Virtualization uses TLS certificates to encrypt network traffic between JDBC and ODBC database clients and a virtual database service. You can supply your own custom certificate, or use a service certificate that is generated by the OpenShift certificate authority. If you do not supply a custom TLS certificate, the Data Virtualization Operator generates a service certificate automatically.
Service certificates provide for encrypted communications for internal and external clients alike. However, only internal clients, that is, clients that are deployed in the same OpenShift cluster, can validate the authenticity of a service certificate.
OpenShift service certificates have the following characteristics:
- Consist of a public key certificate (tls.crt) and a private key (tls.key) in PEM base-64-encoded format.
- Stored in an encryption secret in the OpenShift pod.
- Signed by the OpenShift CA.
- Valid for one year.
- Replaced automatically before expiration.
- Can be validated by internal clients only.
External clients do not recognize validity of certificates generated by the OpenShift certificate authority. To enable external clients to validate certificates, you must provide custom certificates from trusted, third-party certificate authorities (CAs). Such certificates are universally recognized, and can be verified by any client. To add a custom certificate to a virtual database, you supply information about the certificate in an encryption secret that you deploy to OpenShift before you run the Data Virtualization Operator to create the service.
When you deploy the encryption secret to OpenShift, it becomes available to the Data Virtualization Operator when it creates a virtual database. The Operator detects the secret with the name that matches the name of the virtual database in the CR, and it automatically configures the service to use the specified certificate to encrypt connections with database clients.
7.3. Using custom TLS certificates to encrypt communications between database clients and endpoints
You can add a custom TLS certificate to OpenShift to encrypt communications between JDBC or ODBC clients and a virtual database service. Because custom certificates are issued by trusted third-party certificate authorities (CAs), clients can authenticate the CA signature on the certificate.
To configure an OpenShift pod to use a custom certificate to encrypt traffic, you add the certificate details to an OpenShift secret and deploy the secret to the namespace where you want to create the virtual database. You must create the secret before you create the service.
Prerequisites
- You have a TLS certificate from a trusted, third-party CA.
- You have Developer or Administrator access to the OpenShift project where you want to create the secret and virtual database.
Procedure
- Create a YAML file to define a secret of type kubernetes.io/tls, and include the following information:
  - The public and private keys of the TLS key pair.
  - The name of the virtual database that you want to create.
  - The OpenShift namespace in which you want to create the virtual database.
For example:
```yaml
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: dv-customer     # 1
  namespace: myproject  # 2
data:                   # 3
  tls.crt: >-
    -----BEGIN CERTIFICATE-----
    [...]
    -----END CERTIFICATE-----
  tls.key: >-
    -----BEGIN PRIVATE KEY-----
    [...]
    -----END PRIVATE KEY-----
```
1. The name of the secret. The secret name must match the name of the virtual database object in the CR YAML file that the Data Virtualization Operator uses to create a virtual database, for example, dv-customer.
2. The OpenShift namespace in which the virtual database service is deployed, for example, myproject.
3. The data value is made up of the contents of the TLS public key certificate (tls.crt) and the private encryption key (tls.key) in base64-encoded PEM format.
- Save the file as tls_secret.yaml.
- Open a terminal window, sign in to the OpenShift project where you want to add the secret, and then type the following command:

```shell
$ oc apply -f tls_secret.yaml
```
After you deploy the TLS secret to OpenShift, run the Data Virtualization Operator to create a virtual database with the name that is specified in the secret.
When the Operator creates the virtual database, it matches the name in the secret to the name specified for the service in the CR. The Operator then configures the service to use the secret to encrypt client communications with the service.
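Because the data map of a kubernetes.io/tls secret must contain base64-encoded PEM content, generating the manifest programmatically avoids copy-paste encoding mistakes. The following sketch uses a hypothetical helper; the dv-customer and myproject values are the examples used above, and the PEM bodies are placeholders:

```python
import base64

def tls_secret(name: str, namespace: str, cert_pem: str, key_pem: str) -> dict:
    """Build a kubernetes.io/tls Secret manifest with base64-encoded PEM data."""
    encode = lambda text: base64.b64encode(text.encode()).decode()
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "type": "kubernetes.io/tls",
        "metadata": {"name": name, "namespace": namespace},
        "data": {"tls.crt": encode(cert_pem), "tls.key": encode(key_pem)},
    }

# PEM bodies are elided placeholders; read your real certificate files instead.
secret = tls_secret(
    "dv-customer", "myproject",
    "-----BEGIN CERTIFICATE-----\n[...]\n-----END CERTIFICATE-----",
    "-----BEGIN PRIVATE KEY-----\n[...]\n-----END PRIVATE KEY-----",
)
```

Serializing the resulting dictionary to YAML and applying it with oc apply is equivalent to writing the secret file by hand.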
7.4. Using secrets to store data source credentials
Create and deploy secret objects to store values for your environment variables.
Although secrets exist primarily to protect sensitive data by obscuring the value of a property, you can use them to store the value of any property.
Prerequisites
- You have the login credentials and other information that are required to access the data source.
Procedure
- Create a secrets file to contain the credentials for your data source, and save it locally as a YAML file. For example:

Sample secret.yaml file

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgresql
type: Opaque
stringData:
  database-user: bob
  database-name: sampledb
  database-password: bob_password
```
- Deploy the secret object on OpenShift. Log in to OpenShift, and open the project that you want to use for your virtual database. For example:

```shell
oc login --token=<token> --server=https://<server>
oc project <projectName>
```
Run the following command to deploy the secret file:

```shell
oc create -f ./secret.yaml
```
- Set an environment variable to retrieve its value from the secret. In the environment variable, use the format valueFrom: secretKeyRef to specify that the variable retrieves its value from a key in the secret that you created in Step 1. For example, in the following excerpt, the SPRING_DATASOURCE_SAMPLEDB_PASSWORD variable retrieves its value from a reference to the database-password key of the postgresql secret:
```yaml
- name: SPRING_DATASOURCE_SAMPLEDB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: postgresql
      key: database-password
```
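The stringData field accepts plain-text values; when OpenShift stores the secret, it encodes each value into the base64 data map. The following sketch mimics that conversion for the sample postgresql secret:

```python
import base64

def to_data(string_data: dict) -> dict:
    """Mimic OpenShift's conversion of stringData values into base64 data."""
    return {key: base64.b64encode(value.encode()).decode()
            for key, value in string_data.items()}

data = to_data({
    "database-user": "bob",
    "database-name": "sampledb",
    "database-password": "bob_password",
})
```

This is why inspecting a stored secret with oc get secret postgresql -o yaml shows base64 strings rather than the plain values you supplied.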
Additional resources
- For more information about how to use secrets on OpenShift, see Providing sensitive data to pods in the OpenShift documentation.