Chapter 3. Database Images
3.1. Overview
This topic group includes information on the different database images available for OpenShift users.
Enabling clustering for database images is currently in Technology Preview and not intended for production use.
3.2. MySQL
3.2.1. Overview
OpenShift provides a Docker image for running MySQL. This image can provide database services based on username, password, and database name settings provided via configuration.
3.2.2. Versions
Currently, OpenShift provides version 5.5 of MySQL.
3.2.3. Images
This image comes in two flavors, depending on your needs:
- RHEL 7
- CentOS 7
RHEL 7 Based Image
The RHEL 7 image is available through Red Hat’s subscription registry via:
$ docker pull registry.access.redhat.com/openshift3/mysql-55-rhel7
CentOS 7 Based Image
This image is available on DockerHub. To download it:
$ docker pull openshift/mysql-55-centos7
To use these images, you can either access them directly from these registries or push them into your OpenShift Docker registry. Additionally, you can create an ImageStream that points to the image, either in your Docker registry or at the external location. Your OpenShift resources can then reference the ImageStream. You can find example ImageStream definitions for all the provided OpenShift images.
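For illustration, here is a minimal ImageStream sketch that tracks the CentOS 7 MySQL image on DockerHub. This is an assumption-laden example, not the canonical definition: the resource name mysql-55-centos7 is an arbitrary choice, and you would create it with oc create -f <file>:

{
  "kind": "ImageStream",
  "apiVersion": "v1",
  "metadata": {
    "name": "mysql-55-centos7"
  },
  "spec": {
    "dockerImageRepository": "openshift/mysql-55-centos7"
  }
}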
3.2.4. Configuration and Usage
3.2.4.1. Initializing the Database
The first time you use the shared volume, the database is created along with the database administrator user and the MySQL root user (if you specify the MYSQL_ROOT_PASSWORD environment variable). Afterwards, the MySQL daemon starts up. If you are re-attaching the volume to another container, then the database, database user, and administrator user are not created, and the MySQL daemon starts.
The following command creates a new database pod with MySQL running in a container:
$ oc new-app -e \
    MYSQL_USER=<username>,MYSQL_PASSWORD=<password>,MYSQL_DATABASE=<database_name> \
    registry.access.redhat.com/openshift3/mysql-55-rhel7
3.2.4.2. Running MySQL Commands in Containers
OpenShift uses Software Collections (SCLs) to install and launch MySQL. If you want to execute a MySQL command inside a running container (for debugging), you must invoke it using bash.
To do so, first identify the name of the pod. For example, you can view the list of pods in your current project:
$ oc get pods
Then, open a remote shell session to the pod:
$ oc rsh <pod>
When you enter the container, the required SCL is automatically enabled.
You can now run the mysql command from the bash shell to start a MySQL interactive session and perform normal MySQL operations. For example, to authenticate as the database user:
bash-4.2$ mysql -u $MYSQL_USER -p$MYSQL_PASSWORD -h $HOSTNAME $MYSQL_DATABASE
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.5.37 MySQL Community Server (GPL)
...
mysql>
When you are finished, enter quit or exit to leave the MySQL session.
3.2.4.3. Environment Variables
The MySQL user name, password, and database name must be configured with the following environment variables:
Variable Name | Description |
---|---|
MYSQL_USER | Specifies the user name for the database user that is created for use by your application. |
MYSQL_PASSWORD | Password for the MYSQL_USER account. |
MYSQL_DATABASE | Name of the database to which MYSQL_USER has full rights. |
MYSQL_ROOT_PASSWORD | Optional password for the root user. If this is not set, then remote login to the root account is not possible. Local connections from within the container are always permitted without a password. |
You must specify the user name, password, and database name. If you do not specify all three, the pod will fail to start and OpenShift will continuously try to restart it.
MySQL settings can be configured with the following environment variables:
Variable Name | Description | Default |
---|---|---|
MYSQL_LOWER_CASE_TABLE_NAMES | Sets how the table names are stored and compared. | 0 |
MYSQL_MAX_CONNECTIONS | The maximum permitted number of simultaneous client connections. | 151 |
MYSQL_FT_MIN_WORD_LEN | The minimum length of the word to be included in a FULLTEXT index. | 4 |
MYSQL_FT_MAX_WORD_LEN | The maximum length of the word to be included in a FULLTEXT index. | 20 |
MYSQL_AIO | Controls the innodb_use_native_aio setting value in case the native AIO is broken. | 1 |
3.2.4.4. Volume Mount Points
The MySQL image can be run with mounted volumes to enable persistent storage for the database:
- /var/lib/mysql/data - This is the data directory where MySQL stores database files.
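For illustration, a minimal sketch of running the CentOS 7 image directly with Docker and a host directory mounted at this location. The host path /opt/mysql/data and the credential values are assumptions; depending on your setup, you may also need to make the host directory writable by the container user:

$ docker run -d --name mysql \
    -e MYSQL_USER=user -e MYSQL_PASSWORD=pass -e MYSQL_DATABASE=sampledb \
    -v /opt/mysql/data:/var/lib/mysql/data \
    openshift/mysql-55-centos7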
3.2.4.5. Changing Passwords
Passwords are part of the image configuration, therefore the only supported method to change passwords for the database user (MYSQL_USER) and root user is by changing the environment variables MYSQL_PASSWORD and MYSQL_ROOT_PASSWORD, respectively.
You can view the current passwords by viewing the pod or deployment configuration in the web console or by listing the environment variables with the CLI:
$ oc env pod <pod_name> --list
Whenever MYSQL_ROOT_PASSWORD is set, it enables remote access for the root user with the given password; whenever it is unset, remote access for the root user is disabled. This does not affect the regular user MYSQL_USER, who always has remote access. It also does not affect local access by the root user, who can always log in without a password on localhost.
Changing database passwords through SQL statements, or any way other than through the aforementioned environment variables, causes a mismatch between the values stored in the variables and the actual passwords. Whenever a database container starts, it resets the passwords to the values stored in the environment variables.
To change these passwords, update one or both of the desired environment variables for the related deployment configuration(s) using the oc env command. If multiple deployment configurations use these environment variables, for example in the case of an application created from a template, you must update the variables on each deployment configuration so that the passwords are in sync everywhere. This can all be done in the same command:
$ oc env dc <dc_name> [<dc_name_2> ...] \
    MYSQL_PASSWORD=<new_password> \
    MYSQL_ROOT_PASSWORD=<new_root_password>
Depending on your application, there may be other environment variables for passwords in other parts of the application that should also be updated to match. For example, a front-end pod might define more generic DATABASE_USER and DATABASE_PASSWORD variables that must match the database user's name and password. Ensure that passwords are in sync for all required environment variables per your application, otherwise your pods may fail to redeploy when triggered.
Updating the environment variables triggers the redeployment of the database server if you have a configuration change trigger. Otherwise, you must manually start a new deployment in order to apply the password changes.
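For example, to start a new deployment manually (a sketch, assuming a deployment configuration named mysql):

$ oc deploy mysql --latest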
To verify that new passwords are in effect, first open a remote shell session to the running MySQL pod:
$ oc rsh <pod>
From the bash shell, verify the database user’s new password:
bash-4.2$ mysql -u $MYSQL_USER -p<new_password> -h $HOSTNAME $MYSQL_DATABASE -te "SELECT * FROM (SELECT database()) db CROSS JOIN (SELECT user()) u"
If the password was changed correctly, you should see a table like this:
+------------+---------------------+
| database() | user()              |
+------------+---------------------+
| sampledb   | user0PG@172.17.42.1 |
+------------+---------------------+
To verify the root user’s new password:
bash-4.2$ mysql -u root -p<new_root_password> -h $HOSTNAME $MYSQL_DATABASE -te "SELECT * FROM (SELECT database()) db CROSS JOIN (SELECT user()) u"
If the password was changed correctly, you should see a table like this:
+------------+------------------+
| database() | user()           |
+------------+------------------+
| sampledb   | root@172.17.42.1 |
+------------+------------------+
3.2.5. Creating a Database Service from a Template
OpenShift provides a template to make creating a new database service easy. The template provides parameter fields to define all the mandatory environment variables (user, password, database name, and so on) with predefined defaults, including auto-generation of password values. It also defines both a deployment configuration and a service.
The MySQL templates should have been registered in the default openshift project by your cluster administrator during the First Steps setup process. There are two templates available:
- mysql-ephemeral is for development or testing purposes only because it uses ephemeral storage for the database content. This means that if the database pod is restarted for any reason, such as the pod being moved to another node or the deployment configuration being updated and triggering a redeploy, all data will be lost.
- mysql-persistent uses a persistent volume store for the database data, which means the data will survive a pod restart. Using persistent volumes requires a persistent volume pool be defined in the OpenShift deployment. Cluster administrator instructions for setting up the pool are located here.
To instantiate a template, follow these instructions.
Once you have instantiated the service, you can copy the user name, password, and database name environment variables into a deployment configuration for another component that intends to access the database. That component can then access the database via the service that was defined.
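As a sketch, assuming a front-end deployment configuration named frontend that expects generic DATABASE_* variables (both the name and the variables are assumptions for illustration), you could wire it up with oc env:

$ oc env dc/frontend \
    DATABASE_USER=<username> \
    DATABASE_PASSWORD=<password> \
    DATABASE_NAME=<database_name>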
3.2.6. Using MySQL Replication
Enabling clustering for database images is currently in Technology Preview and not intended for production use.
Red Hat provides a proof-of-concept template for MySQL master-slave replication (clustering); you can obtain the example template from GitHub.
To upload the example template into the current project’s template library:
$ oc create -f \
    https://raw.githubusercontent.com/openshift/mysql/master/5.5/examples/replica/mysql_replica.json
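Once uploaded, you can instantiate the template by processing it and creating the resulting objects. A sketch, assuming the template is named mysql-replication-example (run oc get templates to check the actual name) and that the remaining parameters use their defaults:

$ oc process mysql-replication-example \
    -v MYSQL_USER=<username>,MYSQL_PASSWORD=<password> \
    | oc create -f -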
The following sections detail the objects defined in the example template and describe how they work together to start a cluster of MySQL servers implementing master-slave replication. This is the recommended replication strategy for MySQL.
3.2.6.1. Creating the Deployment Configuration for the MySQL Master
To set up MySQL replication, the example template defines deployment configurations, each of which creates a replication controller. For MySQL master-slave replication, two deployment configurations are needed: one defines the MySQL master server and the other defines the MySQL slave servers.
To tell a MySQL server to act as the master, the command field in the container's definition in the deployment configuration must be set to run-mysqld-master. This script acts as an alternative entrypoint for the MySQL image and configures the MySQL server to run as the master in replication.
MySQL replication requires a special user that relays data between the master and slaves. The following environment variables are defined in the template for this purpose:
Variable Name | Description | Default |
---|---|---|
MYSQL_MASTER_USER | The user name of the replication user. | master |
MYSQL_MASTER_PASSWORD | The password for the replication user. | generated |
Example 3.1. MySQL Master Deployment Configuration Object Definition in the Example Template
{ "kind":"DeploymentConfig", "apiVersion":"v1", "metadata":{ "name":"mysql-master" }, "spec":{ "strategy":{ "type":"Recreate" }, "triggers":[ { "type":"ConfigChange" } ], "replicas":1, "selector":{ "name":"mysql-master" }, "template":{ "metadata":{ "labels":{ "name":"mysql-master" } }, "spec":{ "volumes":[ { "name":"mysql-master-data", "persistentVolumeClaim":{ "claimName":"mysql-master" } } ], "containers":[ { "name":"server", "image":"openshift/mysql-55-centos7", "command":[ "run-mysqld-master" ], "ports":[ { "containerPort":3306, "protocol":"TCP" } ], "env":[ { "name":"MYSQL_MASTER_USER", "value":"${MYSQL_MASTER_USER}" }, { "name":"MYSQL_MASTER_PASSWORD", "value":"${MYSQL_MASTER_PASSWORD}" }, { "name":"MYSQL_USER", "value":"${MYSQL_USER}" }, { "name":"MYSQL_PASSWORD", "value":"${MYSQL_PASSWORD}" }, { "name":"MYSQL_DATABASE", "value":"${MYSQL_DATABASE}" }, { "name":"MYSQL_ROOT_PASSWORD", "value":"${MYSQL_ROOT_PASSWORD}" } ], "volumeMounts":[ { "name":"mysql-master-data", "mountPath":"/var/lib/mysql/data" } ], "resources":{ }, "terminationMessagePath":"/dev/termination-log", "imagePullPolicy":"IfNotPresent", "securityContext":{ "capabilities":{ }, "privileged":false } } ], "restartPolicy":"Always", "dnsPolicy":"ClusterFirst" } } } }
Because this deployment configuration claims a persistent volume to persist all of the MySQL master server's data, you must ask your cluster administrator to create a persistent volume from which you can claim the storage.
After the deployment configuration is created and the pod with the MySQL master server is started, it creates the database defined by MYSQL_DATABASE and configures the server to replicate the database to the slaves.
The example provided defines only one replica of the MySQL master server. This causes OpenShift to start only one instance of the server. Multiple instances (multi-master) are not supported, so you cannot scale this replication controller.
To replicate the database created by the MySQL master, a second deployment configuration is defined in the template. This deployment configuration creates a replication controller that launches the MySQL image with the command field set to run-mysqld-slave. This alternative entrypoint skips the initialization of the database and configures the MySQL server to connect to the mysql-master service, which is also defined in the example template.
Example 3.2. MySQL Slave Deployment Configuration Object Definition in the Example Template
{ "kind":"DeploymentConfig", "apiVersion":"v1", "metadata":{ "name":"mysql-slave" }, "spec":{ "strategy":{ "type":"Recreate" }, "triggers":[ { "type":"ConfigChange" } ], "replicas":1, "selector":{ "name":"mysql-slave" }, "template":{ "metadata":{ "labels":{ "name":"mysql-slave" } }, "spec":{ "containers":[ { "name":"server", "image":"openshift/mysql-55-centos7", "command":[ "run-mysqld-slave" ], "ports":[ { "containerPort":3306, "protocol":"TCP" } ], "env":[ { "name":"MYSQL_MASTER_USER", "value":"${MYSQL_MASTER_USER}" }, { "name":"MYSQL_MASTER_PASSWORD", "value":"${MYSQL_MASTER_PASSWORD}" }, { "name":"MYSQL_DATABASE", "value":"${MYSQL_DATABASE}" } ], "resources":{ }, "terminationMessagePath":"/dev/termination-log", "imagePullPolicy":"IfNotPresent", "securityContext":{ "capabilities":{ }, "privileged":false } } ], "restartPolicy":"Always", "dnsPolicy":"ClusterFirst" } } } }
This example deployment configuration starts the replication controller with the initial number of replicas set to 1. You can scale this replication controller in both directions, up to the resource capacity of your account.
3.2.6.2. Creating a Headless Service
The pods created by the MySQL slave replication controller must reach the MySQL master server in order to register for replication. The example template defines a headless service named mysql-master for this purpose. This service is not used only for replication; clients can also send queries to mysql-master:3306 as the MySQL host.
To have a headless service, the portalIP parameter in the service definition is set to None. Then you can use a DNS query to get a list of the pod IP addresses that represent the current endpoints for this service.
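For example, from inside another pod in the same project, and assuming the dig utility is available in that image, you can list the current endpoints (the IP address shown is illustrative):

$ dig mysql-master A +search +short
172.17.0.4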
Example 3.3. Headless Service Object Definition in the Example Template
{ "kind":"Service", "apiVersion":"v1", "metadata":{ "name":"mysql-master", "labels":{ "name":"mysql-master" } }, "spec":{ "ports":[ { "protocol":"TCP", "port":3306, "targetPort":3306, "nodePort":0 } ], "selector":{ "name":"mysql-master" }, "portalIP":"None", "type":"ClusterIP", "sessionAffinity":"None" }, "status":{ "loadBalancer":{ } } }
3.2.6.3. Scaling the MySQL Slaves
To increase the number of members in the cluster:
$ oc scale rc mysql-slave-1 --replicas=<number>
This tells the replication controller to create a new MySQL slave pod. When a new slave is created, the slave entrypoint first attempts to contact the mysql-master service and register itself for replication. Once registered, the MySQL master server sends the slave the replicated database.
When scaling down, the MySQL slave is shut down and, because the slave does not have any persistent storage defined, all data on the slave is lost. The MySQL master server then discovers that the slave is not reachable anymore, and it automatically removes it from the replication.
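To inspect replication from a slave's point of view, you can open a remote shell to a slave pod and query its status; as noted earlier, local root connections inside the container do not require a password (a sketch, with the output omitted):

$ oc rsh <slave_pod>
bash-4.2$ mysql -u root -e "SHOW SLAVE STATUS\G"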
3.3. PostgreSQL
3.3.1. Overview
OpenShift provides a Docker image for running PostgreSQL. This image can provide database services based on username, password, and database name settings provided via configuration.
3.3.2. Versions
Currently, OpenShift supports version 9.2 of PostgreSQL.
3.3.3. Images
This image comes in two flavors, depending on your needs:
- RHEL 7
- CentOS 7
RHEL 7 Based Image
The RHEL 7 image is available through Red Hat’s subscription registry via:
$ docker pull registry.access.redhat.com/openshift3/postgresql-92-rhel7
CentOS 7 Based Image
This image is available on DockerHub. To download it:
$ docker pull openshift/postgresql-92-centos7
To use these images, you can either access them directly from these registries or push them into your OpenShift Docker registry. Additionally, you can create an ImageStream that points to the image, either in your Docker registry or at the external location. Your OpenShift resources can then reference the ImageStream. You can find example ImageStream definitions for all the provided OpenShift images.
3.3.4. Configuration and Usage
3.3.4.1. Initializing the Database
The first time you use the shared volume, the database is created along with the database administrator user and the PostgreSQL postgres user (if you specify the POSTGRESQL_ADMIN_PASSWORD environment variable). Afterwards, the PostgreSQL daemon starts up. If you are re-attaching the volume to another container, then the database, the database user, and the administrator user are not created, and the PostgreSQL daemon starts.
The following command creates a new database pod with PostgreSQL running in a container:
$ oc new-app -e \
    POSTGRESQL_USER=<username>,POSTGRESQL_PASSWORD=<password>,POSTGRESQL_DATABASE=<database_name> \
    registry.access.redhat.com/openshift3/postgresql-92-rhel7
3.3.4.2. Running PostgreSQL Commands in Containers
OpenShift uses Software Collections (SCLs) to install and launch PostgreSQL. If you want to execute a PostgreSQL command inside a running container (for debugging), you must invoke it using bash.
To do so, first identify the name of the running PostgreSQL pod. For example, you can view the list of pods in your current project:
$ oc get pods
Then, open a remote shell session to the desired pod:
$ oc rsh <pod>
When you enter the container, the required SCL is automatically enabled.
You can now run the psql command from the bash shell to start a PostgreSQL interactive session and perform normal PostgreSQL operations. For example, to authenticate as the database user:
bash-4.2$ PGPASSWORD=$POSTGRESQL_PASSWORD psql -h postgresql $POSTGRESQL_DATABASE $POSTGRESQL_USER
psql (9.2.8)
Type "help" for help.

default=>
When you are finished, enter \q to leave the PostgreSQL session.
3.3.4.3. Environment Variables
The PostgreSQL user name, password, and database name must be configured with the following environment variables:
Variable Name | Description |
---|---|
POSTGRESQL_USER | User name for the PostgreSQL account to be created. This user has full rights to the database. |
POSTGRESQL_PASSWORD | Password for the user account. |
POSTGRESQL_DATABASE | Database name. |
POSTGRESQL_ADMIN_PASSWORD | Optional password for the postgres administrator user. If this is not set, then remote login to the postgres account is not possible. Local connections from within the container are always permitted without a password. |
You must specify the user name, password, and database name. If you do not specify all three, the pod will fail to start and OpenShift will continuously try to restart it.
PostgreSQL settings can be configured with the following environment variables:
Variable Name | Description | Default |
---|---|---|
POSTGRESQL_MAX_CONNECTIONS | The maximum number of client connections allowed. This also sets the maximum number of prepared transactions. | 100 |
POSTGRESQL_SHARED_BUFFERS | Configures how much memory is dedicated to PostgreSQL for caching data. | 32M |
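For example, to raise these limits on an existing deployment configuration, set the variables with oc env; the name postgresql is an assumption, and the new values take effect on the next deployment:

$ oc env dc/postgresql \
    POSTGRESQL_MAX_CONNECTIONS=150 \
    POSTGRESQL_SHARED_BUFFERS=64M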
3.3.4.4. Volume Mount Points
The PostgreSQL image can be run with mounted volumes to enable persistent storage for the database:
- /var/lib/pgsql/data - This is the database cluster directory where PostgreSQL stores database files.
3.3.4.5. Changing Passwords
Passwords are part of the image configuration, therefore the only supported method to change passwords for the database user (POSTGRESQL_USER) and postgres administrator user is by changing the environment variables POSTGRESQL_PASSWORD and POSTGRESQL_ADMIN_PASSWORD, respectively.
You can view the current passwords by viewing the pod or deployment configuration in the web console or by listing the environment variables with the CLI:
$ oc env pod <pod_name> --list
Changing database passwords through SQL statements, or any way other than through the aforementioned environment variables, causes a mismatch between the values stored in the variables and the actual passwords. Whenever a database container starts, it resets the passwords to the values stored in the environment variables.
To change these passwords, update one or both of the desired environment variables for the related deployment configuration(s) using the oc env command. If multiple deployment configurations use these environment variables, for example in the case of an application created from a template, you must update the variables on each deployment configuration so that the passwords are in sync everywhere. This can all be done in the same command:
$ oc env dc <dc_name> [<dc_name_2> ...] \
    POSTGRESQL_PASSWORD=<new_password> \
    POSTGRESQL_ADMIN_PASSWORD=<new_admin_password>
Depending on your application, there may be other environment variables for passwords in other parts of the application that should also be updated to match. For example, a front-end pod might define more generic DATABASE_USER and DATABASE_PASSWORD variables that must match the database user's name and password. Ensure that passwords are in sync for all required environment variables per your application, otherwise your pods may fail to redeploy when triggered.
Updating the environment variables triggers the redeployment of the database server if you have a configuration change trigger. Otherwise, you must manually start a new deployment in order to apply the password changes.
To verify that new passwords are in effect, first open a remote shell session to the running PostgreSQL pod:
$ oc rsh <pod>
From the bash shell, verify the database user’s new password:
bash-4.2$ PGPASSWORD=<new_password> psql -h postgresql $POSTGRESQL_DATABASE $POSTGRESQL_USER -c "SELECT * FROM (SELECT current_database()) cdb CROSS JOIN (SELECT current_user) cu"
If the password was changed correctly, you should see a table like this:
 current_database | current_user
------------------+--------------
 default          | django
(1 row)
From the bash shell, verify the postgres administrator user’s new password:
bash-4.2$ PGPASSWORD=<new_admin_password> psql -h postgresql $POSTGRESQL_DATABASE postgres -c "SELECT * FROM (SELECT current_database()) cdb CROSS JOIN (SELECT current_user) cu"
If the password was changed correctly, you should see a table like this:
 current_database | current_user
------------------+--------------
 default          | postgres
(1 row)
3.3.5. Creating a Database Service from a Template
OpenShift provides a template to make creating a new database service easy. The template provides parameter fields to define all the mandatory environment variables (user, password, database name, and so on) with predefined defaults, including auto-generation of password values. It also defines both a deployment configuration and a service.
The PostgreSQL templates should have been registered in the default openshift project by your cluster administrator during the First Steps setup process. There are two templates available:
- postgresql-ephemeral is for development or testing purposes only because it uses ephemeral storage for the database content. This means that if the database pod is restarted for any reason, such as the pod being moved to another node or the deployment configuration being updated and triggering a redeploy, all data will be lost.
- postgresql-persistent uses a persistent volume store for the database data, which means the data will survive a pod restart. Using persistent volumes requires a persistent volume pool be defined in the OpenShift deployment. Cluster administrator instructions for setting up the pool are located here.
To instantiate a template, follow these instructions.
Once you have instantiated the service, you can copy the user name, password, and database name environment variables into a deployment configuration for another component that intends to access the database. That component can then access the database via the service that was defined.
3.4. MongoDB
3.4.1. Overview
OpenShift provides a Docker image for running MongoDB. This image can provide database services based on username, password, and database name settings provided via configuration.
3.4.2. Versions
Currently, OpenShift provides version 2.4 of MongoDB.
3.4.3. Images
This image comes in two flavors, depending on your needs:
- RHEL 7
- CentOS 7
RHEL 7 Based Image
The RHEL 7 image is available through Red Hat’s subscription registry via:
$ docker pull registry.access.redhat.com/openshift3/mongodb-24-rhel7
CentOS 7 Based Image
This image is available on DockerHub. To download it:
$ docker pull openshift/mongodb-24-centos7
To use these images, you can either access them directly from these registries or push them into your OpenShift Docker registry. Additionally, you can create an ImageStream that points to the image, either in your Docker registry or at the external location. Your OpenShift resources can then reference the ImageStream. You can find example ImageStream definitions for all the provided OpenShift images.
3.4.4. Configuration and Usage
3.4.4.1. Initializing the Database
The first time you use the shared volume, the database is created along with the database administrator user. Afterwards, the MongoDB daemon starts up. If you are re-attaching the volume to another container, then the database, database user, and the administrator user are not created, and the MongoDB daemon starts.
The following command creates a new database pod with MongoDB running in a container:
$ oc new-app -e \
    MONGODB_USER=<username>,MONGODB_PASSWORD=<password>,MONGODB_DATABASE=<database_name>,MONGODB_ADMIN_PASSWORD=<admin_password> \
    registry.access.redhat.com/openshift3/mongodb-24-rhel7
3.4.4.2. Running MongoDB Commands in Containers
OpenShift uses Software Collections (SCLs) to install and launch MongoDB. If you want to execute a MongoDB command inside a running container (for debugging), you must invoke it using bash.
To do so, first identify the name of the running MongoDB pod. For example, you can view the list of pods in your current project:
$ oc get pods
Then, open a remote shell session to the desired pod:
$ oc rsh <pod>
When you enter the container, the required SCL is automatically enabled.
You can now run mongo commands from the bash shell to start a MongoDB interactive session and perform normal MongoDB operations. For example, to switch to the sampledb database and authenticate as the database user:
bash-4.2$ mongo -u $MONGODB_USER -p $MONGODB_PASSWORD $MONGODB_DATABASE
MongoDB shell version: 2.4.9
connecting to: sampledb
>
When you are finished, press CTRL+D to leave the MongoDB session.
3.4.4.3. Environment Variables
The MongoDB user name, password, database name, and admin password must be configured with the following environment variables:
Variable Name | Description |
---|---|
MONGODB_USER | User name for MongoDB account to be created. |
MONGODB_PASSWORD | Password for the user account. |
MONGODB_DATABASE | Database name. |
MONGODB_ADMIN_PASSWORD | Password for the admin user. |
You must specify the user name, password, database name, and admin password. If you do not specify all four, the pod will fail to start and OpenShift will continuously try to restart it.
The administrator user name is set to admin and you must specify its password by setting the MONGODB_ADMIN_PASSWORD environment variable. This is done upon database initialization.
MongoDB settings can be configured with the following environment variables:
Variable Name | Description | Default |
---|---|---|
MONGODB_NOPREALLOC | Disable data file preallocation. | true |
MONGODB_SMALLFILES | Set MongoDB to use a smaller default data file size. | true |
MONGODB_QUIET | Runs MongoDB in a quiet mode that attempts to limit the amount of output. | true |
3.4.4.4. Volume Mount Points
The MongoDB image can be run with mounted volumes to enable persistent storage for the database:
- /var/lib/mongodb - This is the database directory where MongoDB stores database files.
3.4.4.5. Changing Passwords
Passwords are part of the image configuration, therefore the only supported method to change passwords for the database user (MONGODB_USER) and admin user is by changing the environment variables MONGODB_PASSWORD and MONGODB_ADMIN_PASSWORD, respectively.
You can view the current passwords by viewing the pod or deployment configuration in the web console or by listing the environment variables with the CLI:
$ oc env pod <pod_name> --list
Changing database passwords directly in MongoDB causes a mismatch between the values stored in the variables and the actual passwords. Whenever a database container starts, it resets the passwords to the values stored in the environment variables.
To change these passwords, update one or both of the desired environment variables for the related deployment configuration(s) using the oc env command. If multiple deployment configurations use these environment variables, for example in the case of an application created from a template, you must update the variables on each deployment configuration so that the passwords are in sync everywhere. This can all be done in the same command:
$ oc env dc <dc_name> [<dc_name_2> ...] \
    MONGODB_PASSWORD=<new_password> \
    MONGODB_ADMIN_PASSWORD=<new_admin_password>
Depending on your application, there may be other environment variables for passwords in other parts of the application that should also be updated to match. For example, a front-end pod might define more generic DATABASE_USER and DATABASE_PASSWORD variables that must match the database user's name and password. Ensure that passwords are in sync for all required environment variables per your application, otherwise your pods may fail to redeploy when triggered.
Updating the environment variables triggers the redeployment of the database server if you have a configuration change trigger. Otherwise, you must manually start a new deployment in order to apply the password changes.
To verify that new passwords are in effect, first open a remote shell session to the running MongoDB pod:
$ oc rsh <pod>
From the bash shell, verify the database user’s new password:
bash-4.2$ mongo -u $MONGODB_USER -p <new_password> $MONGODB_DATABASE --eval "db.version()"
If the password was changed correctly, you should see output like this:
MongoDB shell version: 2.4.9
connecting to: sampledb
2.4.9
To verify the admin user’s new password:
bash-4.2$ mongo -u admin -p <new_admin_password> admin --eval "db.version()"
If the password was changed correctly, you should see output like this:
MongoDB shell version: 2.4.9
connecting to: admin
2.4.9
3.4.5. Creating a Database Service from a Template
OpenShift provides a template to make creating a new database service easy. The template provides parameter fields to define all the mandatory environment variables (user, password, database name, and so on) with predefined defaults, including auto-generation of password values. It also defines both a deployment configuration and a service.
The MongoDB templates should have been registered in the default openshift project by your cluster administrator during the First Steps setup process. There are two templates available:
- mongodb-ephemeral is for development or testing purposes only because it uses ephemeral storage for the database content. This means that if the database pod is restarted for any reason, such as the pod being moved to another node or the deployment configuration being updated and triggering a redeploy, all data will be lost.
- mongodb-persistent uses a persistent volume store for the database data, which means the data will survive a pod restart. Using persistent volumes requires a persistent volume pool be defined in the OpenShift deployment. Cluster administrator instructions for setting up the pool are located here.
To instantiate a template, follow these instructions.
Once you have instantiated the service, you can copy the user name, password, and database name environment variables into a deployment configuration for another component that intends to access the database. That component can then access the database via the service that was defined.
3.4.6. Using MongoDB Replication
Enabling clustering for database images is currently in Technology Preview and not intended for production use.
Red Hat provides a proof-of-concept template for MongoDB replication (clustering); you can obtain the example template from GitHub.
For example, to upload the example template into the current project’s template library:
$ oc create -f \
    https://raw.githubusercontent.com/openshift/mongodb/master/2.4/examples/replica/mongodb-clustered.json
The example template does not use persistent storage. When you lose all members of the replication set, your data will be lost.
The following sections detail the objects defined in the example template and describe how they work together to start a cluster of MongoDB servers implementing master-slave replication and automated failover. This is the recommended replication strategy for MongoDB.
3.4.6.1. Creating the Deployment Configuration
To set up MongoDB replication, the example template defines a deployment configuration that creates a replication controller. The replication controller manages the members of the MongoDB cluster.
To tell a MongoDB server that the member will be part of the cluster, additional environment variables are provided for the container defined in the replication controller pod template:
Variable Name | Description | Default |
---|---|---|
MONGODB_REPLICA_NAME | Specifies the name of the replication set. | rs0 |
MONGODB_KEYFILE_VALUE | See: Generate a Key File | generated |
Example 3.4. Deployment Configuration Object Definition in the Example Template
{ "kind": "DeploymentConfig", "apiVersion": "v1", "metadata": { "name": "${MONGODB_SERVICE_NAME}", }, "spec": { "strategy": { "type": "Recreate", "resources": {} }, "triggers": [ { "type":"ConfigChange" } ], "replicas": 3, "selector": { "name": "mongodb-replica" }, "template": { "metadata": { "labels": { "name": "mongodb-replica" } }, "spec": { "containers": [ { "name": "member", "image": "openshift/mongodb-24-centos7", "env": [ { "name": "MONGODB_USER", "value": "${MONGODB_USER}" }, { "name": "MONGODB_PASSWORD", "value": "${MONGODB_PASSWORD}" }, { "name": "MONGODB_DATABASE", "value": "${MONGODB_DATABASE}" }, { "name": "MONGODB_ADMIN_PASSWORD", "value": "${MONGODB_ADMIN_PASSWORD}" }, { "name": "MONGODB_REPLICA_NAME", "value": "${MONGODB_REPLICA_NAME}" }, { "name": "MONGODB_SERVICE_NAME", "value": "${MONGODB_SERVICE_NAME}" }, { "name": "MONGODB_KEYFILE_VALUE", "value": "${MONGODB_KEYFILE_VALUE}" } ], "ports":[ { "containerPort": 27017, "protocol": "TCP" } ] } ] } }, "restartPolicy": "Never", "dnsPolicy": "ClusterFirst" } }
After the deployment configuration is created and the pods with MongoDB cluster members are started, they will not be initialized. Instead, they start as part of the rs0 replication set, as the value of MONGODB_REPLICA_NAME is set to rs0 by default.
3.4.6.2. Creating the Service Pod
To initialize members created by the deployment configuration, a service pod is defined in the template. This pod starts MongoDB with the initiate argument, which instructs the container entrypoint to behave slightly differently than a regular, stand-alone MongoDB database.
Example 3.5. Service Pod Object Definition in the Example Template
{ "kind": "Pod", "apiVersion": "v1", "metadata": { "name": "mongodb-service", "creationTimestamp": null, "labels": { "name": "mongodb-service" } }, "spec": { "restartPolicy": "Never", "dnsPolicy": "ClusterFirst", "containers": [ { "name": "initiate", "image": "openshift/mongodb-24-centos7", "args": ["initiate"], "env": [ { "name": "MONGODB_USER", "value": "${MONGODB_USER}" }, { "name": "MONGODB_PASSWORD", "value": "${MONGODB_PASSWORD}" }, { "name": "MONGODB_DATABASE", "value": "${MONGODB_DATABASE}" }, { "name": "MONGODB_ADMIN_PASSWORD", "value": "${MONGODB_ADMIN_PASSWORD}" }, { "name": "MONGODB_REPLICA_NAME", "value": "${MONGODB_REPLICA_NAME}" }, { "name": "MONGODB_SERVICE_NAME", "value": "${MONGODB_SERVICE_NAME}" }, { "name": "MONGODB_KEYFILE_VALUE", "value": "${MONGODB_KEYFILE_VALUE}" } ] } ] } }
3.4.6.3. Creating a Headless Service
The initiate argument in the container specification above instructs the container to first discover all running member pods within the MongoDB cluster. To achieve this, a headless service named mongodb is defined in the example template.
To have a headless service, the portalIP parameter in the service definition is set to None. Then you can use a DNS query to get a list of the pod IP addresses that represent the current endpoints for this service.
Example 3.6. Headless Service Object Definition in the Example Template
{ "kind": "Service", "apiVersion": "v1", "metadata": { "name": "${MONGODB_SERVICE_NAME}", "labels": { "name": "${MONGODB_SERVICE_NAME}" } }, "spec": { "ports": [ { "protocol": "TCP", "port": 27017, "targetPort": 27017, "nodePort": 0 } ], "selector": { "name": "mongodb-replica" }, "portalIP": "None", "type": "ClusterIP", "sessionAffinity": "None" }, "status": { "loadBalancer": {} } }
3.4.6.4. Creating the Final Replication Set
When the script that runs as the container entrypoint has the IP addresses of all running MongoDB members, it creates a MongoDB replication set configuration listing all member IP addresses. It then initiates the replication set using rs.initiate(config). The script waits until MongoDB elects the PRIMARY member of the cluster.
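Conceptually, the entrypoint performs the equivalent of the following MongoDB shell commands; the member IP addresses are illustrative:

> config = {
    _id: "rs0",
    members: [
      { _id: 0, host: "172.17.0.3:27017" },
      { _id: 1, host: "172.17.0.4:27017" },
      { _id: 2, host: "172.17.0.5:27017" }
    ]
  }
> rs.initiate(config)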
Once the PRIMARY member has been elected, the entrypoint script starts creating MongoDB users and databases. The service pod runs MongoDB without the --auth argument, so it can bootstrap the PRIMARY member without providing any authentication.
When the user accounts and databases are created and the data are replicated to other members, the service pod then gives up its PRIMARY role and shuts down.
It is important that the restartPolicy field in the service pod is set to Never, to prevent the service pod from restarting when the container exits.
As soon as the service pod shuts down, other members start a new election and the new PRIMARY member is elected from the running members.
Clients can then start using the MongoDB instance by sending queries to the mongodb service. Because this is a headless service, clients do not need to provide an IP address; they can use mongodb:27017 for connections. The service then sends the query to one of the members in the replication set.
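For example, a client pod in the same project could connect through the service like this (sampledb is the illustrative database name used earlier in this chapter):

bash-4.2$ mongo mongodb:27017/sampledb -u $MONGODB_USER -p $MONGODB_PASSWORD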
3.4.6.5. Scaling the MongoDB Replication Set
To increase the number of members in the cluster:
$ oc scale rc mongodb-1 --replicas=<number>
This tells the replication controller to create a new MongoDB member pod. When a new member is created, the member entrypoint first attempts to discover other running members in the cluster. It then chooses one and adds itself to the list of members. Once the replication configuration is updated, the other members replicate the data to a new pod and start a new election.
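To verify that the new member has joined, you can check the replication set status from any member pod; this sketch assumes the admin user has sufficient privileges to run rs.status():

$ oc rsh <member_pod>
bash-4.2$ mongo admin -u admin -p $MONGODB_ADMIN_PASSWORD --eval "rs.status().members.length"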