
Chapter 3. Creating applications


3.1. Using templates

The following sections provide an overview of templates, as well as how to use and create them.

3.1.1. Understanding templates

A template describes a set of objects that can be parameterized and processed to produce a list of objects for creation by OpenShift Container Platform. A template can be processed to create anything you have permission to create within a project, for example services, build configurations, and deployment configurations. A template can also define a set of labels to apply to every object defined in the template.

You can create a list of objects from a template using the CLI or, if a template has been uploaded to your project or the global template library, using the web console.

3.1.2. Uploading a template

If you have a JSON or YAML file that defines a template, you can upload the template to projects using the CLI. This saves the template to the project for repeated use by any user with appropriate access to that project. Instructions about writing your own templates are provided later in this topic.

Procedure

  • Upload a template using one of the following methods:

    • To upload a template to your current project’s template library, pass the JSON or YAML file with the following command:

      $ oc create -f <filename>
    • To upload a template to a different project, use the -n option with the name of the project:

      $ oc create -f <filename> -n <project>

The template is now available for selection using the web console or the CLI.
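
For example, you can verify that the template was saved by listing the templates in the project (the project name is a placeholder):

$ oc get templates -n <project>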

3.1.3. Creating an application by using the web console

You can use the web console to create an application from a template.

Procedure

  1. Select Developer from the context selector at the top of the web console navigation menu.
  2. While in the desired project, click +Add.
  3. Click All services in the Developer Catalog tile.
  4. Click Builder Images under Type to see the available builder images.

    Note

    Only image stream tags that have the builder tag listed in their annotations appear in this list, as demonstrated here:

    kind: "ImageStream"
    apiVersion: "image.openshift.io/v1"
    metadata:
      name: "ruby"
      creationTimestamp: null
    spec:
    # ...
      tags:
        - name: "2.6"
          annotations:
            description: "Build and run Ruby 2.6 applications"
            iconClass: "icon-ruby"
            tags: "builder,ruby" 1
            supports: "ruby:2.6,ruby"
            version: "2.6"
    # ...
    1
    Including builder here ensures this image stream tag appears in the web console as a builder.
  5. Modify the settings in the new application screen to configure the objects to support your application.

3.1.4. Creating objects from templates by using the CLI

You can use the CLI to process templates and use the configuration that is generated to create objects.

3.1.4.1. Adding labels

Labels are used to manage and organize generated objects, such as pods. The labels specified in the template are applied to every object that is generated from the template.

Procedure

  • Add labels in the template from the command line:

    $ oc process -f <filename> -l name=otherLabel
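
    For example, assuming the template is saved as my-template.yaml and you want every generated object to carry an app=my-app label (both values are placeholders), you can apply the label and create the objects in one step:

    $ oc process -f my-template.yaml -l app=my-app | oc create -f -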

3.1.4.2. Listing parameters

The parameters that you can override are listed in the parameters section of the template.

Procedure

  1. You can list parameters with the CLI by using the following command and specifying the file to be used:

    $ oc process --parameters -f <filename>

    Alternatively, if the template is already uploaded:

    $ oc process --parameters -n <project> <template_name>

    For example, the following shows the output when listing the parameters for one of the quick start templates in the default openshift project:

    $ oc process --parameters -n openshift rails-postgresql-example

    Example output

    NAME                         DESCRIPTION                                                                                              GENERATOR           VALUE
    SOURCE_REPOSITORY_URL        The URL of the repository with your application source code                                                                  https://github.com/sclorg/rails-ex.git
    SOURCE_REPOSITORY_REF        Set this to a branch name, tag or other ref of your repository if you are not using the default branch
    CONTEXT_DIR                  Set this to the relative path to your project if it is not in the root of your repository
    APPLICATION_DOMAIN           The exposed hostname that will route to the Rails service                                                                    rails-postgresql-example.openshiftapps.com
    GITHUB_WEBHOOK_SECRET        A secret string used to configure the GitHub webhook                                                     expression          [a-zA-Z0-9]{40}
    SECRET_KEY_BASE              Your secret key for verifying the integrity of signed cookies                                            expression          [a-z0-9]{127}
    APPLICATION_USER             The application user that is used within the sample application to authorize access on pages                                 openshift
    APPLICATION_PASSWORD         The application password that is used within the sample application to authorize access on pages                             secret
    DATABASE_SERVICE_NAME        Database service name                                                                                                        postgresql
    POSTGRESQL_USER              database username                                                                                        expression          user[A-Z0-9]{3}
    POSTGRESQL_PASSWORD          database password                                                                                        expression          [a-zA-Z0-9]{8}
    POSTGRESQL_DATABASE          database name                                                                                                                root
    POSTGRESQL_MAX_CONNECTIONS   database max connections                                                                                                     10
    POSTGRESQL_SHARED_BUFFERS    database shared buffers                                                                                                      12MB

    The output identifies several parameters that are generated with a regular expression-like generator when the template is processed.

3.1.4.3. Generating a list of objects

Using the CLI, you can process a file defining a template to return the list of objects to standard output.

Procedure

  1. Process a file defining a template to return the list of objects to standard output:

    $ oc process -f <filename>

    Alternatively, if the template has already been uploaded to the current project:

    $ oc process <template_name>
  2. Create objects from a template by processing the template and piping the output to oc create:

    $ oc process -f <filename> | oc create -f -

    Alternatively, if the template has already been uploaded to the current project:

    $ oc process <template> | oc create -f -
  3. You can override any parameter values defined in the file by adding the -p option for each <name>=<value> pair you want to override. A parameter reference can appear in any text field inside the template items.

    For example, in the following example, the POSTGRESQL_USER and POSTGRESQL_DATABASE parameters of a template are overridden to output a configuration with customized environment variables:

    1. Creating a List of objects from a template

      $ oc process -f my-rails-postgresql \
          -p POSTGRESQL_USER=bob \
          -p POSTGRESQL_DATABASE=mydatabase
    2. The processed output can either be redirected to a file or applied directly, without uploading the template, by piping it to the oc create command:

      $ oc process -f my-rails-postgresql \
          -p POSTGRESQL_USER=bob \
          -p POSTGRESQL_DATABASE=mydatabase \
          | oc create -f -
    3. If you have a large number of parameters, you can store them in a file and then pass this file to oc process:

      $ cat postgres.env
      POSTGRESQL_USER=bob
      POSTGRESQL_DATABASE=mydatabase
      $ oc process -f my-rails-postgresql --param-file=postgres.env
    4. You can also read the environment from standard input by using "-" as the argument to --param-file:

      $ sed s/bob/alice/ postgres.env | oc process -f my-rails-postgresql --param-file=-

3.1.5. Modifying uploaded templates

You can edit a template that has already been uploaded to your project.

Procedure

  • Modify a template that has already been uploaded:

    $ oc edit template <template>

3.1.6. Using instant app and quick start templates

OpenShift Container Platform provides a number of default instant app and quick start templates to make it easy to quickly get started creating a new application for different languages. Templates are provided for Rails (Ruby), Django (Python), Node.js, CakePHP (PHP), and Dancer (Perl). Your cluster administrator must create these templates in the default, global openshift project so you have access to them.

By default, the templates build using a public source repository on GitHub that contains the necessary application code.

Procedure

  1. You can list the available default instant app and quick start templates with:

    $ oc get templates -n openshift
  2. To modify the source and build your own version of the application:

    1. Fork the repository referenced by the template’s default SOURCE_REPOSITORY_URL parameter.
    2. Override the value of the SOURCE_REPOSITORY_URL parameter when creating from the template, specifying your fork instead of the default value.

      By doing this, the build configuration created by the template now points to your fork of the application code, and you can modify the code and rebuild the application at will.
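
      For example, a sketch of instantiating the quick start template while pointing it at your fork (the repository URL is a placeholder for your fork):

      $ oc new-app --template=rails-postgresql-example \
          -p SOURCE_REPOSITORY_URL=https://github.com/<your_user>/rails-ex.git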

Note

Some of the instant app and quick start templates define a database deployment configuration. The configuration they define uses ephemeral storage for the database content. These templates should be used for demonstration purposes only as all database data is lost if the database pod restarts for any reason.

3.1.6.1. Quick start templates

A quick start template is a basic example of an application running on OpenShift Container Platform. Quick starts come in a variety of languages and frameworks, and are defined in a template, which is constructed from a set of services, build configurations, and deployment configurations. This template references the necessary images and source repositories to build and deploy the application.

To explore a quick start, create an application from a template. Your administrator must have already installed these templates in your OpenShift Container Platform cluster, in which case you can simply select one from the web console.

Quick starts refer to a source repository that contains the application source code. To customize the quick start, fork the repository and, when creating an application from the template, substitute the default source repository name with your forked repository. This results in builds that are performed using your source code instead of the provided example source. You can then update the code in your source repository and launch a new build to see the changes reflected in the deployed application.

3.1.6.1.1. Web framework quick start templates

These quick start templates provide a basic application of the indicated framework and language:

  • CakePHP: a PHP web framework that includes a MySQL database
  • Dancer: a Perl web framework that includes a MySQL database
  • Django: a Python web framework that includes a PostgreSQL database
  • NodeJS: a NodeJS web application that includes a MongoDB database
  • Rails: a Ruby web framework that includes a PostgreSQL database

3.1.7. Writing templates

You can define new templates to make it easy to recreate all the objects of your application. The template defines the objects it creates along with some metadata to guide the creation of those objects.

The following is an example of a simple template object definition (YAML):

apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: redis-template
  annotations:
    description: "Description"
    iconClass: "icon-redis"
    tags: "database,nosql"
objects:
- apiVersion: v1
  kind: Pod
  metadata:
    name: redis-master
  spec:
    containers:
    - env:
      - name: REDIS_PASSWORD
        value: ${REDIS_PASSWORD}
      image: dockerfile/redis
      name: master
      ports:
      - containerPort: 6379
        protocol: TCP
parameters:
- description: Password used for Redis authentication
  from: '[A-Z0-9]{8}'
  generate: expression
  name: REDIS_PASSWORD
labels:
  redis: master
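
If you save this definition as redis-template.yaml (the file name is arbitrary), a minimal sketch of uploading the template to the current project and then instantiating it is:

$ oc create -f redis-template.yaml
$ oc process redis-template | oc create -f -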

3.1.7.1. Writing the template description

The template description informs you what the template does and helps you find it when searching in the web console. Additional metadata beyond the template name is optional, but useful to have. In addition to general descriptive information, the metadata also includes a set of tags. Useful tags include the name of the language the template is related to, for example, Java, PHP, Ruby, and so on.

The following is an example of template description metadata:

kind: Template
apiVersion: template.openshift.io/v1
metadata:
  name: cakephp-mysql-example 1
  annotations:
    openshift.io/display-name: "CakePHP MySQL Example (Ephemeral)" 2
    description: >-
      An example CakePHP application with a MySQL database. For more information
      about using this template, including OpenShift considerations, see
      https://github.com/sclorg/cakephp-ex/blob/master/README.md.


      WARNING: Any data stored will be lost upon pod destruction. Only use this
      template for testing. 3
    openshift.io/long-description: >-
      This template defines resources needed to develop a CakePHP application,
      including a build configuration, application DeploymentConfig, and
      database DeploymentConfig.  The database is stored in
      non-persistent storage, so this configuration should be used for
      experimental purposes only. 4
    tags: "quickstart,php,cakephp" 5
    iconClass: icon-php 6
    openshift.io/provider-display-name: "Red Hat, Inc." 7
    openshift.io/documentation-url: "https://github.com/sclorg/cakephp-ex" 8
    openshift.io/support-url: "https://access.redhat.com" 9
message: "Your admin credentials are ${ADMIN_USERNAME}:${ADMIN_PASSWORD}" 10
1
The unique name of the template.
2
A brief, user-friendly name, which can be employed by user interfaces.
3
A description of the template. Include enough detail that users understand what is being deployed and any caveats they must know before deploying. It should also provide links to additional information, such as a README file. Newlines can be included to create paragraphs.
4
Additional template description. This may be displayed by the service catalog, for example.
5
Tags to be associated with the template for searching and grouping. Add tags that place it into one of the provided catalog categories. Refer to the id and categoryAliases in CATALOG_CATEGORIES in the console constants file. The categories can also be customized for the whole cluster.
6
An icon to be displayed with your template in the web console.

Example 3.1. Available icons

  • icon-3scale
  • icon-aerogear
  • icon-amq
  • icon-angularjs
  • icon-ansible
  • icon-apache
  • icon-beaker
  • icon-camel
  • icon-capedwarf
  • icon-cassandra
  • icon-catalog-icon
  • icon-clojure
  • icon-codeigniter
  • icon-cordova
  • icon-datagrid
  • icon-datavirt
  • icon-debian
  • icon-decisionserver
  • icon-django
  • icon-dotnet
  • icon-drupal
  • icon-eap
  • icon-elastic
  • icon-erlang
  • icon-fedora
  • icon-freebsd
  • icon-git
  • icon-github
  • icon-gitlab
  • icon-glassfish
  • icon-go-gopher
  • icon-golang
  • icon-grails
  • icon-hadoop
  • icon-haproxy
  • icon-helm
  • icon-infinispan
  • icon-jboss
  • icon-jenkins
  • icon-jetty
  • icon-joomla
  • icon-jruby
  • icon-js
  • icon-knative
  • icon-kubevirt
  • icon-laravel
  • icon-load-balancer
  • icon-mariadb
  • icon-mediawiki
  • icon-memcached
  • icon-mongodb
  • icon-mssql
  • icon-mysql-database
  • icon-nginx
  • icon-nodejs
  • icon-openjdk
  • icon-openliberty
  • icon-openshift
  • icon-openstack
  • icon-other-linux
  • icon-other-unknown
  • icon-perl
  • icon-phalcon
  • icon-php
  • icon-play
  • icon-postgresql
  • icon-processserver
  • icon-python
  • icon-quarkus
  • icon-rabbitmq
  • icon-rails
  • icon-redhat
  • icon-redis
  • icon-rh-integration
  • icon-rh-spring-boot
  • icon-rh-tomcat
  • icon-ruby
  • icon-scala
  • icon-serverlessfx
  • icon-shadowman
  • icon-spring-boot
  • icon-spring
  • icon-sso
  • icon-stackoverflow
  • icon-suse
  • icon-symfony
  • icon-tomcat
  • icon-ubuntu
  • icon-vertx
  • icon-wildfly
  • icon-windows
  • icon-wordpress
  • icon-xamarin
  • icon-zend
7
The name of the person or organization providing the template.
8
A URL referencing further documentation for the template.
9
A URL where support can be obtained for the template.
10
An instructional message that is displayed when this template is instantiated. This field should inform the user how to use the newly created resources. Parameter substitution is performed on the message before being displayed so that generated credentials and other parameters can be included in the output. Include links to any next-steps documentation that users should follow.

3.1.7.2. Writing template labels

Templates can include a set of labels. These labels are added to each object created when the template is instantiated. Defining a label in this way makes it easy for users to find and manage all the objects created from a particular template.

The following is an example of template object labels:

kind: "Template"
apiVersion: "v1"
...
labels:
  template: "cakephp-mysql-example" 1
  app: "${NAME}" 2
1
A label that is applied to all objects created from this template.
2
A parameterized label that is also applied to all objects created from this template. Parameter expansion is carried out on both label keys and values.

3.1.7.3. Writing template parameters

Parameters allow a value to be supplied by you or generated when the template is instantiated. Then, that value is substituted wherever the parameter is referenced. References can be defined in any field in the objects list field. This is useful for generating random passwords or allowing you to supply a hostname or other user-specific value that is required to customize the template. Parameters can be referenced in two ways:

  • As a string value by placing values in the form ${PARAMETER_NAME} in any string field in the template.
  • As a JSON or YAML value by placing values in the form ${{PARAMETER_NAME}} in place of any field in the template.

When using the ${PARAMETER_NAME} syntax, multiple parameter references can be combined in a single field and the reference can be embedded within fixed data, such as "http://${PARAMETER_1}${PARAMETER_2}". Both parameter values are substituted and the resulting value is a quoted string.

When using the ${{PARAMETER_NAME}} syntax, only a single parameter reference is allowed, and leading and trailing characters are not permitted. After substitution, the resulting value is left unquoted if it is a valid JSON value; otherwise, it is quoted and treated as a standard string.

A single parameter can be referenced multiple times within a template and it can be referenced using both substitution syntaxes within a single template.

A default value can be provided, which is used if you do not supply a different value. The following is an example of setting an explicit value as the default value:

parameters:
  - name: USERNAME
    description: "The user name for Joe"
    value: joe

Parameter values can also be generated based on rules specified in the parameter definition, for example generating a parameter value:

parameters:
  - name: PASSWORD
    description: "The random user password"
    generate: expression
    from: "[a-zA-Z0-9]{12}"

In the previous example, processing generates a random password 12 characters long consisting of uppercase and lowercase letters and numbers.

The syntax available is not a full regular expression syntax. However, you can use \w, \d, \a, and \A modifiers:

  • [\w]{10} produces 10 alphabet characters, numbers, and underscores. This follows the PCRE standard and is equal to [a-zA-Z0-9_]{10}.
  • [\d]{10} produces 10 numbers. This is equal to [0-9]{10}.
  • [\a]{10} produces 10 alphabetical characters. This is equal to [a-zA-Z]{10}.
  • [\A]{10} produces 10 punctuation or symbol characters. This is equal to [~!@#$%\^&*()\-_+={}\[\]\\|<,>.?/"';:`]{10}.
Note

Depending on whether the template is written in YAML or JSON, and the type of string that the modifier is embedded within, you might need to escape the backslash with a second backslash. The following examples are equivalent:

Example YAML template with a modifier

  parameters:
  - name: singlequoted_example
    generate: expression
    from: '[\A]{10}'
  - name: doublequoted_example
    generate: expression
    from: "[\\A]{10}"

Example JSON template with a modifier

{
    "parameters": [
       {
        "name": "json_example",
        "generate": "expression",
        "from": "[\\A]{10}"
       }
    ]
}

Here is an example of a full template with parameter definitions and references:

kind: Template
apiVersion: template.openshift.io/v1
metadata:
  name: my-template
objects:
  - kind: BuildConfig
    apiVersion: build.openshift.io/v1
    metadata:
      name: cakephp-mysql-example
      annotations:
        description: Defines how to build the application
    spec:
      source:
        type: Git
        git:
          uri: "${SOURCE_REPOSITORY_URL}" 1
          ref: "${SOURCE_REPOSITORY_REF}"
        contextDir: "${CONTEXT_DIR}"
  - kind: DeploymentConfig
    apiVersion: apps.openshift.io/v1
    metadata:
      name: frontend
    spec:
      replicas: "${{REPLICA_COUNT}}" 2
parameters:
  - name: SOURCE_REPOSITORY_URL 3
    displayName: Source Repository URL 4
    description: The URL of the repository with your application source code 5
    value: https://github.com/sclorg/cakephp-ex.git 6
    required: true 7
  - name: GITHUB_WEBHOOK_SECRET
    description: A secret string used to configure the GitHub webhook
    generate: expression 8
    from: "[a-zA-Z0-9]{40}" 9
  - name: REPLICA_COUNT
    description: Number of replicas to run
    value: "2"
    required: true
message: "... The GitHub webhook secret is ${GITHUB_WEBHOOK_SECRET} ..." 10
1
This value is replaced with the value of the SOURCE_REPOSITORY_URL parameter when the template is instantiated.
2
This value is replaced with the unquoted value of the REPLICA_COUNT parameter when the template is instantiated.
3
The name of the parameter. This value is used to reference the parameter within the template.
4
The user-friendly name for the parameter. This is displayed to users.
5
A description of the parameter. Provide more detailed information for the purpose of the parameter, including any constraints on the expected value. Descriptions should use complete sentences to follow the console’s text standards. Do not make this a duplicate of the display name.
6
A default value for the parameter, which is used if you do not override the value when instantiating the template. Avoid using default values for things like passwords; instead, use generated parameters in combination with secrets.
7
Indicates this parameter is required, meaning you cannot override it with an empty value. If the parameter does not provide a default or generated value, you must supply a value.
8
A parameter which has its value generated.
9
The input to the generator. In this case, the generator produces a 40 character alphanumeric value including upper and lowercase characters.
10
Parameters can be included in the template message. This informs you about generated values.
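
As a sketch, this template could be processed while overriding two of its parameters from the CLI; the file name my-template.yaml is assumed, and the repository URL is a placeholder for your own fork:

$ oc process -f my-template.yaml \
    -p SOURCE_REPOSITORY_URL=https://github.com/<your_user>/cakephp-ex.git \
    -p REPLICA_COUNT=3 \
    | oc create -f -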

3.1.7.4. Writing the template object list

The main portion of the template is the list of objects which is created when the template is instantiated. This can be any valid API object, such as a build configuration, deployment configuration, or service. The object is created exactly as defined here, with any parameter values substituted in prior to creation. The definition of these objects can reference parameters defined earlier.

The following is an example of an object list:

kind: "Template"
apiVersion: "v1"
metadata:
  name: my-template
objects:
  - kind: "Service" 1
    apiVersion: "v1"
    metadata:
      name: "cakephp-mysql-example"
      annotations:
        description: "Exposes and load balances the application pods"
    spec:
      ports:
        - name: "web"
          port: 8080
          targetPort: 8080
      selector:
        name: "cakephp-mysql-example"
1
The definition of a service, which is created by this template.
Note

If an object definition metadata includes a fixed namespace field value, the field is stripped out of the definition during template instantiation. If the namespace field contains a parameter reference, normal parameter substitution is performed and the object is created in whatever namespace the parameter substitution resolved the value to, assuming the user has permission to create objects in that namespace.

3.1.7.5. Marking a template as bindable

The Template Service Broker advertises one service in its catalog for each template object of which it is aware. By default, each of these services is advertised as being bindable, meaning an end user is permitted to bind against the provisioned service.

Procedure

Template authors can prevent end users from binding against services provisioned from a given template.

  • Prevent end users from binding against services provisioned from a given template by adding the annotation template.openshift.io/bindable: "false" to the template.
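
For example, the annotation is placed in the template metadata; the template name here is a placeholder:

kind: Template
apiVersion: template.openshift.io/v1
metadata:
  name: my-nonbindable-template
  annotations:
    template.openshift.io/bindable: "false"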

3.1.7.6. Exposing template object fields

Template authors can indicate that fields of particular objects in a template should be exposed. The Template Service Broker recognizes exposed fields on ConfigMap, Secret, Service, and Route objects, and returns the values of the exposed fields when a user binds a service backed by the broker.

To expose one or more fields of an object, add annotations prefixed by template.openshift.io/expose- or template.openshift.io/base64-expose- to the object in the template.

Each annotation key, with its prefix removed, is passed through to become a key in a bind response.

Each annotation value is a Kubernetes JSONPath expression, which is resolved at bind time to indicate the object field whose value should be returned in the bind response.

Note

Bind response key-value pairs can be used in other parts of the system as environment variables. Therefore, it is recommended that every annotation key with its prefix removed should be a valid environment variable name — beginning with a character A-Z, a-z, or _, and being followed by zero or more characters A-Z, a-z, 0-9, or _.

Note

Unless escaped with a backslash, Kubernetes' JSONPath implementation interprets characters such as ., @, and others as metacharacters, regardless of their position in the expression. Therefore, for example, to refer to a ConfigMap datum named my.key, the required JSONPath expression would be {.data['my\.key']}. Depending on how the JSONPath expression is then written in YAML, an additional backslash might be required, for example "{.data['my\\.key']}".

The following is an example of different objects' fields being exposed:

kind: Template
apiVersion: template.openshift.io/v1
metadata:
  name: my-template
objects:
- kind: ConfigMap
  apiVersion: v1
  metadata:
    name: my-template-config
    annotations:
      template.openshift.io/expose-username: "{.data['my\\.username']}"
  data:
    my.username: foo
- kind: Secret
  apiVersion: v1
  metadata:
    name: my-template-config-secret
    annotations:
      template.openshift.io/base64-expose-password: "{.data['password']}"
  stringData:
    password: <password>
- kind: Service
  apiVersion: v1
  metadata:
    name: my-template-service
    annotations:
      template.openshift.io/expose-service_ip_port: "{.spec.clusterIP}:{.spec.ports[?(.name==\"web\")].port}"
  spec:
    ports:
    - name: "web"
      port: 8080
- kind: Route
  apiVersion: route.openshift.io/v1
  metadata:
    name: my-template-route
    annotations:
      template.openshift.io/expose-uri: "http://{.spec.host}{.spec.path}"
  spec:
    path: mypath

An example response to a bind operation given the above partial template follows:

{
  "credentials": {
    "username": "foo",
    "password": "YmFy",
    "service_ip_port": "172.30.12.34:8080",
    "uri": "http://route-test.router.default.svc.cluster.local/mypath"
  }
}

Procedure

  • Use the template.openshift.io/expose- annotation to return the field value as a string. This is convenient, although it does not handle arbitrary binary data.
  • If you want to return binary data, use the template.openshift.io/base64-expose- annotation instead to base64 encode the data before it is returned.

3.1.7.7. Waiting for template readiness

Template authors can indicate that the service catalog, Template Service Broker, or TemplateInstance API must wait for certain objects within a template to become ready before considering the template instantiation complete.

To use this feature, mark one or more objects of kind Build, BuildConfig, Deployment, DeploymentConfig, Job, or StatefulSet in a template with the following annotation:

"template.alpha.openshift.io/wait-for-ready": "true"

Template instantiation is not complete until all objects marked with the annotation report ready. Similarly, if any of the annotated objects report failed, or if the template fails to become ready within a fixed timeout of one hour, the template instantiation fails.

For the purposes of instantiation, readiness and failure of each object kind are defined as follows:

  • Build
    • Readiness: Object reports phase complete.
    • Failure: Object reports phase canceled, error, or failed.
  • BuildConfig
    • Readiness: Latest associated build object reports phase complete.
    • Failure: Latest associated build object reports phase canceled, error, or failed.
  • Deployment
    • Readiness: Object reports new replica set and deployment available. This honors readiness probes defined on the object.
    • Failure: Object reports progressing condition as false.
  • DeploymentConfig
    • Readiness: Object reports new replication controller and deployment available. This honors readiness probes defined on the object.
    • Failure: Object reports progressing condition as false.
  • Job
    • Readiness: Object reports completion.
    • Failure: Object reports that one or more failures have occurred.
  • StatefulSet
    • Readiness: Object reports all replicas ready. This honors readiness probes defined on the object.
    • Failure: Not applicable.

The following is an example template extract, which uses the wait-for-ready annotation. Further examples can be found in the OpenShift Container Platform quick start templates.

kind: Template
apiVersion: template.openshift.io/v1
metadata:
  name: my-template
objects:
- kind: BuildConfig
  apiVersion: build.openshift.io/v1
  metadata:
    name: ...
    annotations:
      # wait-for-ready used on BuildConfig ensures that template instantiation
      # will fail immediately if build fails
      template.alpha.openshift.io/wait-for-ready: "true"
  spec:
    ...
- kind: DeploymentConfig
  apiVersion: apps.openshift.io/v1
  metadata:
    name: ...
    annotations:
      template.alpha.openshift.io/wait-for-ready: "true"
  spec:
    ...
- kind: Service
  apiVersion: v1
  metadata:
    name: ...
  spec:
    ...

Additional recommendations

  • Set memory, CPU, and storage default sizes to make sure your application is given enough resources to run smoothly.
  • Avoid referencing the latest tag from images if that tag is used across major versions. This can cause running applications to break when new images are pushed to that tag.
  • A good template builds and deploys cleanly without requiring modifications after the template is deployed.
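
As a sketch of the first recommendation, default resource sizes are commonly exposed as template parameters with sensible default values and then referenced from the objects; the names and values here are illustrative only:

parameters:
- name: MEMORY_LIMIT
  displayName: Memory Limit
  description: Maximum amount of memory the container can use.
  value: 512Mi
objects:
- kind: DeploymentConfig
  apiVersion: apps.openshift.io/v1
  # ...
  spec:
    template:
      spec:
        containers:
        - name: cakephp-mysql-example
          resources:
            limits:
              memory: "${MEMORY_LIMIT}"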

3.1.7.8. Creating a template from existing objects

Rather than writing an entire template from scratch, you can export existing objects from your project in YAML form, and then modify the YAML by adding parameters and other customizations to turn it into a template.

Procedure

  • Export objects in a project in YAML form:

    $ oc get -o yaml all > <yaml_filename>

    You can also substitute a particular resource type or multiple resources instead of all. Run oc get -h for more examples.

    The object types included in oc get -o yaml all are:

    • BuildConfig
    • Build
    • DeploymentConfig
    • ImageStream
    • Pod
    • ReplicationController
    • Route
    • Service
Note

Using the all alias is not recommended because the contents might vary across different clusters and versions. Instead, specify all required resources.
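
For example, a sketch of exporting only explicitly named resource types for a single application, using an illustrative app=my-app label selector:

$ oc get -o yaml deploymentconfig,buildconfig,imagestream,service,route -l app=my-app > my-app-objects.yaml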

3.2. Creating applications by using the Developer perspective

The Developer perspective in the web console provides you with the following options from the +Add view to create applications and associated services and deploy them on OpenShift Container Platform:

  • Getting started resources: Use these resources to help you get started with the Developer Console. You can choose to hide the header by using the Options menu.

    • Creating applications using samples: Use existing code samples to get started with creating applications on the OpenShift Container Platform.
    • Build with guided documentation: Follow the guided documentation to build applications and familiarize yourself with key concepts and terminologies.
    • Explore new developer features: Explore the new features and resources within the Developer perspective.
  • Developer catalog: Explore the Developer Catalog to select the required applications, services, or source-to-image builders, and then add them to your project.

    • All Services: Browse the catalog to discover services across OpenShift Container Platform.
    • Database: Select the required database service and add it to your application.
    • Operator Backed: Select and deploy the required Operator-managed service.
    • Helm chart: Select the required Helm chart to simplify deployment of applications and services.
    • Devfile: Select a devfile from the Devfile registry to declaratively define a development environment.
    • Event Source: Select an event source to register interest in a class of events from a particular system.

      Note

      The Managed services option is also available if the RHOAS Operator is installed.

  • Git repository: Import an existing codebase, Devfile, or Dockerfile from your Git repository using the From Git, From Devfile, or From Dockerfile options respectively, to build and deploy an application on OpenShift Container Platform.
  • Container images: Use existing images from an image stream or registry to deploy them on OpenShift Container Platform.
  • Pipelines: Use Tekton pipelines to create CI/CD pipelines for your software delivery process on OpenShift Container Platform.
  • Serverless: Explore the Serverless options to create, build, and deploy stateless and serverless applications on OpenShift Container Platform.

    • Channel: Create a Knative channel to create an event forwarding and persistence layer with in-memory and reliable implementations.
  • Samples: Explore the available sample applications to create, build, and deploy an application quickly.
  • Quick Starts: Explore the quick start options to create, import, and run applications with step-by-step instructions and tasks.
  • From Local Machine: Explore the From Local Machine tile to import or upload files on your local machine for building and deploying applications easily.

    • Import YAML: Upload a YAML file to create and define resources for building and deploying applications.
    • Upload JAR file: Upload a JAR file to build and deploy Java applications.
  • Share my Project: Use this option to add or remove users to a project and provide accessibility options to them.
  • Helm Chart repositories: Use this option to add Helm Chart repositories in a namespace.
  • Re-ordering of resources: Use these resources to re-order pinned resources added to your navigation pane. The drag-and-drop icon is displayed on the left side of the pinned resource when you hover over it in the navigation pane. The dragged resource can be dropped only in the section where it resides.

Note that certain options, such as Pipelines, Event Source, and Import Virtual Machines, are displayed only when the OpenShift Pipelines Operator, OpenShift Serverless Operator, and OpenShift Virtualization Operator are installed, respectively.

3.2.1. Prerequisites

To create applications using the Developer perspective, ensure that:

To create serverless applications, in addition to the preceding prerequisites, ensure that:

3.2.2. Creating sample applications

You can use the sample applications in the +Add flow of the Developer perspective to create, build, and deploy applications quickly.

Prerequisites

  • You have logged in to the OpenShift Container Platform web console and are in the Developer perspective.

Procedure

  1. In the +Add view, click the Samples tile to see the Samples page.
  2. On the Samples page, select one of the available sample applications to see the Create Sample Application form.
  3. In the Create Sample Application Form:

    • In the Name field, the deployment name is displayed by default. You can modify this name as required.
    • In the Builder Image Version, a builder image is selected by default. You can modify this image version by using the Builder Image Version drop-down list.
    • A sample Git repository URL is added by default.
  4. Click Create to create the sample application. The build status of the sample application is displayed on the Topology view. After the sample application is created, you can see the deployment added to the application.

3.2.3. Creating applications by using Quick Starts

The Quick Starts page shows you how to create, import, and run applications on OpenShift Container Platform, with step-by-step instructions and tasks.

Prerequisites

  • You have logged in to the OpenShift Container Platform web console and are in the Developer perspective.

Procedure

  1. In the +Add view, click Getting Started resources → Build with guided documentation → View all quick starts to view the Quick Starts page.
  2. In the Quick Starts page, click the tile for the quick start that you want to use.
  3. Click Start to begin the quick start.
  4. Perform the steps that are displayed.

3.2.4. Importing a codebase from Git to create an application

You can use the Developer perspective to create, build, and deploy an application on OpenShift Container Platform using an existing codebase in GitHub.

The following procedure walks you through the From Git option in the Developer perspective to create an application.

Procedure

  1. In the +Add view, click From Git in the Git Repository tile to see the Import from Git form.
  2. In the Git section, enter the Git repository URL for the codebase you want to use to create an application. For example, enter the URL of this sample Node.js application https://github.com/sclorg/nodejs-ex. The URL is then validated.
  3. Optional: You can click Show Advanced Git Options to add details such as:

    • Git Reference to point to code in a specific branch, tag, or commit to be used to build the application.
    • Context Dir to specify the subdirectory for the application source code you want to use to build the application.
    • Source Secret to create a Secret Name with credentials for pulling your source code from a private repository.
  4. Optional: You can import a Devfile, a Dockerfile, Builder Image, or a Serverless Function through your Git repository to further customize your deployment.

    • If your Git repository contains a Devfile, a Dockerfile, a Builder Image, or a func.yaml, it is automatically detected and populated on the respective path fields.
    • If a Devfile, a Dockerfile, or a Builder Image is detected in the same repository, the Devfile is selected by default.
    • If func.yaml is detected in the Git repository, the Import Strategy changes to Serverless Function.
    • Alternatively, you can create a serverless function by clicking Create Serverless function in the +Add view using the Git repository URL.
    • To edit the file import type and select a different strategy, click Edit import strategy option.
    • If multiple Devfiles, Dockerfiles, or Builder Images are detected, specify the respective paths relative to the context directory to import a specific instance.
  5. After the Git URL is validated, the recommended builder image is selected and marked with a star. If the builder image is not auto-detected, select a builder image. For the https://github.com/sclorg/nodejs-ex Git URL, by default the Node.js builder image is selected.

    1. Optional: Use the Builder Image Version drop-down to specify a version.
    2. Optional: Use the Edit import strategy to select a different strategy.
    3. Optional: For the Node.js builder image, use the Run command field to override the command to run the application.
  6. In the General section:

    1. In the Application field, enter a unique name for the application grouping, for example, myapp. Ensure that the application name is unique in a namespace.
    2. The Name field to identify the resources created for this application is automatically populated based on the Git repository URL if there are no existing applications. If there are existing applications, you can choose to deploy the component within an existing application, create a new application, or keep the component unassigned.

      Note

      The resource name must be unique in a namespace. Modify the resource name if you get an error.

  7. In the Resources section, select:

    • Deployment, to create an application in plain Kubernetes style.
    • Deployment Config, to create an OpenShift Container Platform style application.
    • Serverless Deployment, to create a Knative service.

      Note

      To set the default resource preference for importing an application, go to User Preferences → Applications → Resource type field. The Serverless Deployment option is displayed in the Import from Git form only if the OpenShift Serverless Operator is installed in your cluster. The Resources section is not available while creating a serverless function. For further details, refer to the OpenShift Serverless documentation.

  8. In the Pipelines section, select Add Pipeline, and then click Show Pipeline Visualization to see the pipeline for the application. A default pipeline is selected, but you can choose the pipeline you want from the list of available pipelines for the application.

    Note

    The Add pipeline check box is checked and Configure PAC is selected by default if the following criteria are fulfilled:

    • The OpenShift Pipelines Operator is installed
    • pipelines-as-code is enabled
    • .tekton directory is detected in the Git repository
  9. Add a webhook to your repository. If Configure PAC is checked and the GitHub App is set up, you can see the Use GitHub App and Setup a webhook options. If GitHub App is not set up, you can only see the Setup a webhook option:

    1. Go to Settings → Webhooks and click Add webhook.
    2. Set the Payload URL to the Pipelines as Code controller public URL.
    3. Select the content type as application/json.
    4. Add a webhook secret and note it in an alternate location. With openssl installed on your local machine, generate a random secret.
    5. Click Let me select individual events and select these events: Commit comments, Issue comments, Pull request, and Pushes.
    6. Click Add webhook.
  10. Optional: In the Advanced Options section, the Target port and the Create a route to the application options are selected by default so that you can access your application using a publicly available URL.

    If your application does not expose its data on the default public port, 80, clear the check box, and set the target port number you want to expose.

  11. Optional: You can use the following advanced options to further customize your application:

    Routing

    By clicking the Routing link, you can perform the following actions:

    • Customize the hostname for the route.
    • Specify the path the router watches.
    • Select the target port for the traffic from the drop-down list.
    • Secure your route by selecting the Secure Route check box. Select the required TLS termination type and set a policy for insecure traffic from the respective drop-down lists.

      Note

      For serverless applications, the Knative service manages all the routing options above. However, you can customize the target port for traffic, if required. If the target port is not specified, the default port of 8080 is used.

    Domain mapping

    If you are creating a Serverless Deployment, you can add a custom domain mapping to the Knative service during creation.

    • In the Advanced options section, click Show advanced Routing options.

      • If the domain mapping CR that you want to map to the service already exists, you can select it from the Domain mapping drop-down menu.
      • If you want to create a new domain mapping CR, type the domain name into the box, and select the Create option. For example, if you type in example.com, the Create option is Create "example.com".
    Health Checks

    Click the Health Checks link to add Readiness, Liveness, and Startup probes to your application. All the probes have prepopulated default data; you can add the probes with the default data or customize it as required.

    To customize the health probes:

    • Click Add Readiness Probe, if required, modify the parameters to check if the container is ready to handle requests, and select the check mark to add the probe.
    • Click Add Liveness Probe, if required, modify the parameters to check if a container is still running, and select the check mark to add the probe.
    • Click Add Startup Probe, if required, modify the parameters to check if the application within the container has started, and select the check mark to add the probe.

      For each of the probes, you can specify the request type - HTTP GET, Container Command, or TCP Socket, from the drop-down list. The form changes as per the selected request type. You can then modify the default values for the other parameters, such as the success and failure thresholds for the probe, number of seconds before performing the first probe after the container starts, frequency of the probe, and the timeout value.

    Build Configuration and Deployment

    Click the Build Configuration and Deployment links to see the respective configuration options. Some options are selected by default; you can customize them further by adding the necessary triggers and environment variables.

    For serverless applications, the Deployment option is not displayed as the Knative configuration resource maintains the desired state for your deployment instead of a DeploymentConfig resource.

    Scaling

    Click the Scaling link to define the number of pods or instances of the application you want to deploy initially.

    If you are creating a serverless deployment, you can also configure the following settings:

    • Min Pods determines the lower limit for the number of pods that must be running at any given time for a Knative service. This is also known as the minScale setting.
    • Max Pods determines the upper limit for the number of pods that can be running at any given time for a Knative service. This is also known as the maxScale setting.
    • Concurrency target determines the number of concurrent requests desired for each instance of the application at a given time.
    • Concurrency limit determines the limit for the number of concurrent requests allowed for each instance of the application at a given time.
    • Concurrency utilization determines the percentage of the concurrent requests limit that must be met before Knative scales up additional pods to handle additional traffic.
    • Autoscale window defines the time window over which metrics are averaged to provide input for scaling decisions when the autoscaler is not in panic mode. A service is scaled to zero if no requests are received during this window. The default duration for the autoscale window is 60s. This is also known as the stable window.
    Resource Limit

    Click the Resource Limit link to set the amount of CPU and Memory resources a container is guaranteed or allowed to use when running.

    Labels

    Click the Labels link to add custom labels to your application.
  12. Click Create to create the application and a success notification is displayed. You can see the build status of the application in the Topology view.

3.2.5. Creating applications by deploying a container image

You can use an external image registry or an image stream tag from an internal registry to deploy an application on your cluster.

Prerequisites

  • You have logged in to the OpenShift Container Platform web console and are in the Developer perspective.

Procedure

  1. In the +Add view, click Container images to view the Deploy Images page.
  2. In the Image section:

    1. Select Image name from external registry to deploy an image from a public or a private registry, or select Image stream tag from internal registry to deploy an image from an internal registry.
    2. Select an icon for your image in the Runtime icon tab.
  3. In the General section:

    1. In the Application name field, enter a unique name for the application grouping.
    2. In the Name field, enter a unique name to identify the resources created for this component.
  4. In the Resource type section, select the resource type to generate:

    1. Select Deployment to enable declarative updates for Pod and ReplicaSet objects.
    2. Select DeploymentConfig to define the template for a Pod object, and manage deploying new images and configuration sources.
    3. Select Serverless Deployment to enable scaling to zero when idle.
  5. Click Create. You can view the build status of the application in the Topology view.

3.2.6. Deploying a Java application by uploading a JAR file

You can use the web console Developer perspective to upload a JAR file by using the following options:

  • Navigate to the +Add view of the Developer perspective, and click Upload JAR file in the From Local Machine tile. Browse and select your JAR file, or drag a JAR file to deploy your application.
  • Navigate to the Topology view and use the Upload JAR file option, or drag a JAR file to deploy your application.
  • Use the in-context menu in the Topology view, and then use the Upload JAR file option to upload your JAR file to deploy your application.

Prerequisites

  • The Cluster Samples Operator must be installed by a cluster administrator.
  • You have access to the OpenShift Container Platform web console and are in the Developer perspective.

Procedure

  1. In the Topology view, right-click anywhere to view the Add to Project menu.
  2. Hover over the Add to Project menu to see the menu options, and then select the Upload JAR file option to see the Upload JAR file form. Alternatively, you can drag the JAR file into the Topology view.
  3. In the JAR file field, browse for the required JAR file on your local machine and upload it. Alternatively, you can drag the JAR file on to the field. A toast alert is displayed at the top right if an incompatible file type is dragged into the Topology view. A field error is displayed if an incompatible file type is dropped on the field in the upload form.
  4. The runtime icon and builder image are selected by default. If a builder image is not auto-detected, select a builder image. If required, you can change the version using the Builder Image Version drop-down list.
  5. Optional: In the Application Name field, enter a unique name for your application to use for resource labelling.
  6. In the Name field, enter a unique component name for the associated resources.
  7. Optional: Use the Resource type drop-down list to change the resource type.
  8. In the Advanced options menu, click Create a Route to the Application to configure a public URL for your deployed application.
  9. Click Create to deploy the application. A toast notification is shown to notify you that the JAR file is being uploaded. The toast notification also includes a link to view the build logs.
Note

If you attempt to close the browser tab while the build is running, a web alert is displayed.

After the JAR file is uploaded and the application is deployed, you can view the application in the Topology view.

3.2.7. Using the Devfile registry to access devfiles

You can use the devfiles in the +Add flow of the Developer perspective to create an application. The +Add flow provides a complete integration with the devfile community registry. A devfile is a portable YAML file that describes your development environment without needing to configure it from scratch. Using the Devfile registry, you can use a preconfigured devfile to create an application.

Procedure

  1. Navigate to Developer Perspective → +Add → Developer Catalog → All Services. A list of all the available services in the Developer Catalog is displayed.
  2. Under Type, click Devfiles to browse for devfiles that support a particular language or framework. Alternatively, you can use the keyword filter to search for a particular devfile using their name, tag, or description.
  3. Click the devfile you want to use to create an application. The devfile tile displays the details of the devfile, including the name, description, provider, and the documentation of the devfile.
  4. Click Create to create an application and view the application in the Topology view.

3.2.8. Using the Developer Catalog to add services or components to your application

You use the Developer Catalog to deploy applications and services based on Operator-backed services such as Databases, Builder Images, and Helm Charts. The Developer Catalog contains a collection of application components, services, event sources, or source-to-image builders that you can add to your project. Cluster administrators can customize the content made available in the catalog.

Procedure

  1. In the Developer perspective, navigate to the +Add view and from the Developer Catalog tile, click All Services to view all the available services in the Developer Catalog.
  2. Under All Services, select the kind of service or the component you need to add to your project. For this example, select Databases to list all the database services and then click MariaDB to see the details for the service.
  3. Click Instantiate Template to see an automatically populated template with details for the MariaDB service, and then click Create to create and view the MariaDB service in the Topology view.

    Figure 3.1. MariaDB in Topology

3.2.9. Additional resources

3.3. Creating applications from installed Operators

Operators are a method of packaging, deploying, and managing a Kubernetes application. You can create applications on OpenShift Container Platform using Operators that have been installed by a cluster administrator.

This guide walks developers through an example of creating applications from an installed Operator using the OpenShift Container Platform web console.

Additional resources

  • See the Operators guide for more on how Operators work and how the Operator Lifecycle Manager is integrated in OpenShift Container Platform.

3.3.1. Creating an etcd cluster using an Operator

This procedure walks through creating a new etcd cluster using the etcd Operator, managed by Operator Lifecycle Manager (OLM).

Prerequisites

  • Access to an OpenShift Container Platform 4.17 cluster.
  • The etcd Operator already installed cluster-wide by an administrator.

Procedure

  1. Create a new project in the OpenShift Container Platform web console for this procedure. This example uses a project called my-etcd.
  2. Navigate to the Operators → Installed Operators page. The Operators that have been installed to the cluster by the cluster administrator and are available for use are shown here as a list of cluster service versions (CSVs). CSVs are used to launch and manage the software provided by the Operator.

    Tip

    You can get this list from the CLI using:

    $ oc get csv
  3. On the Installed Operators page, click the etcd Operator to view more details and available actions.

    As shown under Provided APIs, this Operator makes available three new resource types, including one for an etcd Cluster (the EtcdCluster resource). These objects work similarly to the built-in native Kubernetes ones, such as Deployment or ReplicaSet, but contain logic specific to managing etcd.

  4. Create a new etcd cluster:

    1. In the etcd Cluster API box, click Create instance.
    2. The next page allows you to make any modifications to the minimal starting template of an EtcdCluster object, such as the size of the cluster. For now, click Create to finalize. This triggers the Operator to start up the pods, services, and other components of the new etcd cluster.
  5. Click the example etcd cluster, then click the Resources tab to see that your project now contains a number of resources created and configured automatically by the Operator.

    Verify that a Kubernetes service has been created that allows you to access the database from other pods in your project, as shown in the verification example after this procedure.

  6. All users with the edit role in a given project can create, manage, and delete application instances (an etcd cluster, in this example) that have already been created in the project and are managed by Operators, in a self-service manner, just like a cloud service. If you want to give additional users this ability, project administrators can add the role by using the following command:

    $ oc policy add-role-to-user edit <user> -n <target_project>
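
As mentioned in step 5, you can also verify from the CLI that the Operator created a Kubernetes service for the etcd cluster. This is a minimal check; the exact names of the services depend on the Operator version:

$ oc get services -n my-etcd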

You now have an etcd cluster that will react to failures and rebalance data as pods become unhealthy or are migrated between nodes in the cluster. Most importantly, cluster administrators or developers with proper access can now easily use the database with their applications.

3.4. Creating applications by using the CLI

You can create an OpenShift Container Platform application from components that include source or binary code, images, and templates by using the OpenShift Container Platform CLI.

The set of objects created by new-app depends on the artifacts passed as input: source repositories, images, or templates.

3.4.1. Creating an application from source code

With the new-app command you can create applications from source code in a local or remote Git repository.

The new-app command creates a build configuration, which itself creates a new application image from your source code. The new-app command typically also creates a Deployment object to deploy the new image, and a service to provide load-balanced access to the deployment running your image.

OpenShift Container Platform automatically detects whether the pipeline, source, or docker build strategy should be used, and in the case of a source build, detects an appropriate language builder image.

3.4.1.1. Local

To create an application from a Git repository in a local directory:

$ oc new-app /<path to source code>
Note

If you use a local Git repository, the repository must have a remote named origin that points to a URL that is accessible by the OpenShift Container Platform cluster. If there is no recognized remote, running the new-app command will create a binary build.
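
For example, a minimal sketch of adding and verifying an origin remote; the URL is a placeholder and must point to a repository that the OpenShift Container Platform cluster can reach:

$ git remote add origin https://github.com/<your_user>/<your_repository>.git
$ git remote -v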

3.4.1.2. Remote

To create an application from a remote Git repository:

$ oc new-app https://github.com/sclorg/cakephp-ex

To create an application from a private remote Git repository:

$ oc new-app https://github.com/youruser/yourprivaterepo --source-secret=yoursecret
Note

If you use a private remote Git repository, you can use the --source-secret flag to specify an existing source clone secret that will get injected into your build config to access the repository.

You can use a subdirectory of your source code repository by specifying a --context-dir flag. To create an application from a remote Git repository and a context subdirectory:

$ oc new-app https://github.com/sclorg/s2i-ruby-container.git \
    --context-dir=2.0/test/puma-test-app

Also, when specifying a remote URL, you can specify a Git branch to use by appending #<branch_name> to the end of the URL:

$ oc new-app https://github.com/openshift/ruby-hello-world.git#beta4

3.4.1.3. Build strategy detection

OpenShift Container Platform automatically determines which build strategy to use by detecting certain files:

  • If a Jenkins file exists in the root or specified context directory of the source repository when creating a new application, OpenShift Container Platform generates a pipeline build strategy.

    Note

    The pipeline build strategy is deprecated; consider using Red Hat OpenShift Pipelines instead.

  • If a Dockerfile exists in the root or specified context directory of the source repository when creating a new application, OpenShift Container Platform generates a docker build strategy.
  • If neither a Jenkins file nor a Dockerfile is detected, OpenShift Container Platform generates a source build strategy.

Override the automatically detected build strategy by setting the --strategy flag to docker, pipeline, or source.

$ oc new-app /home/user/code/myapp --strategy=docker
Note

The oc command requires that files containing build sources are available in a remote Git repository. For all source builds, you must use git remote -v to confirm that such a remote is configured.

3.4.1.4. Language detection

If you use the source build strategy, new-app attempts to determine the language builder to use by the presence of certain files in the root or specified context directory of the repository:

Table 3.1. Languages detected by new-app

Language    Files
dotnet      project.json, *.csproj
jee         pom.xml
nodejs      app.json, package.json
perl        cpanfile, index.pl
php         composer.json, index.php
python      requirements.txt, setup.py
ruby        Gemfile, Rakefile, config.ru
scala       build.sbt
golang      Godeps, main.go

After a language is detected, new-app searches the OpenShift Container Platform server for image stream tags that have a supports annotation matching the detected language, or an image stream that matches the name of the detected language. If a match is not found, new-app searches the Docker Hub registry for an image that matches the detected language based on name.

You can override the builder image that is used for a particular source repository by specifying the image, either an image stream or a container specification, and the repository, separated by a ~. If you do this, build strategy detection and language detection are not carried out.

For example, to use the myproject/my-ruby imagestream with the source in a remote repository:

$ oc new-app myproject/my-ruby~https://github.com/openshift/ruby-hello-world.git

To use the openshift/ruby-20-centos7:latest container image stream with the source in a local repository:

$ oc new-app openshift/ruby-20-centos7:latest~/home/user/code/my-ruby-app
Note

Language detection requires the Git client to be locally installed so that your repository can be cloned and inspected. If Git is not available, you can avoid the language detection step by specifying the builder image to use with your repository with the <image>~<repository> syntax.

The -i <image> <repository> invocation requires that new-app attempt to clone the repository to determine what type of artifact it is, so this invocation fails if Git is not available.

The -i <image> --code <repository> invocation requires that new-app clone the repository to determine whether the image should be used as a builder for the source code or deployed separately, as in the case of a database image.

3.4.2. Creating an application from an image

You can deploy an application from an existing image. Images can come from image streams in the OpenShift Container Platform server, images in a specific registry, or images in the local Docker server.

The new-app command attempts to determine the type of image specified in the arguments passed to it. However, you can explicitly tell new-app whether the image is a container image using the --docker-image argument or an image stream using the -i|--image-stream argument.

Note

If you specify an image from your local Docker repository, you must ensure that the same image is available to the OpenShift Container Platform cluster nodes.

3.4.2.1. Docker Hub MySQL image

Create an application from the Docker Hub MySQL image, for example:

$ oc new-app mysql

3.4.2.2. Image in a private registry

To create an application using an image in a private registry, specify the full container image specification:

$ oc new-app myregistry:5000/example/myimage

3.4.2.3. Existing image stream and optional image stream tag

Create an application from an existing image stream and optional image stream tag:

$ oc new-app my-stream:v1

3.4.3. Creating an application from a template

You can create an application from a previously stored template or from a template file, by specifying the name of the template as an argument. For example, you can store a sample application template and use it to create an application.

Upload an application template to your current project’s template library. The following example uploads an application template from a file called examples/sample-app/application-template-stibuild.json:

$ oc create -f examples/sample-app/application-template-stibuild.json

Then create a new application by referencing the application template. In this example, the template name is ruby-helloworld-sample:

$ oc new-app ruby-helloworld-sample

To create a new application by referencing a template file in your local file system, without first storing it in OpenShift Container Platform, use the -f|--file argument. For example:

$ oc new-app -f examples/sample-app/application-template-stibuild.json

3.4.3.1. Template parameters

When creating an application based on a template, use the -p|--param argument to set parameter values that are defined by the template:

$ oc new-app ruby-helloworld-sample \
    -p ADMIN_USERNAME=admin -p ADMIN_PASSWORD=mypassword

You can store your parameters in a file, then use that file with --param-file when instantiating a template. If you want to read the parameters from standard input, use --param-file=-. The following is an example file called helloworld.params:

ADMIN_USERNAME=admin
ADMIN_PASSWORD=mypassword

Reference the parameters in the file when instantiating a template:

$ oc new-app ruby-helloworld-sample --param-file=helloworld.params
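
Because --param-file=- reads from standard input, you can also pipe the parameters file into the command, mirroring the --env-file=- pattern shown later in this section:

$ cat helloworld.params | oc new-app ruby-helloworld-sample --param-file=-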

3.4.4. Modifying application creation

The new-app command generates OpenShift Container Platform objects that build, deploy, and run the application that is created. Normally, these objects are created in the current project and assigned names that are derived from the input source repositories or the input images. However, with new-app you can modify this behavior.

Table 3.2. new-app output objects

BuildConfig
    A BuildConfig object is created for each source repository that is specified in the command line. The BuildConfig object specifies the strategy to use, the source location, and the build output location.

ImageStreams
    For the BuildConfig object, two image streams are usually created. One represents the input image. With source builds, this is the builder image. With Docker builds, this is the FROM image. The second one represents the output image. If a container image was specified as input to new-app, then an image stream is created for that image as well.

DeploymentConfig
    A DeploymentConfig object is created either to deploy the output of a build, or a specified image. The new-app command creates emptyDir volumes for all Docker volumes that are specified in containers included in the resulting DeploymentConfig object.

Service
    The new-app command attempts to detect exposed ports in input images. It uses the lowest numeric exposed port to generate a service that exposes that port. To expose a different port after new-app has completed, use the oc expose command to generate additional services, as shown in the example after this table.

Other
    Other objects can be generated when instantiating templates, according to the template.
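
For example, after new-app completes, the following sketch generates an additional service for a different port. The deployment name, port, and service name are illustrative only:

$ oc expose deployment/ruby-hello-world --port=8080 --name=ruby-hello-world-admin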

3.4.4.1. Specifying environment variables

When generating applications from a template, source, or an image, you can use the -e|--env argument to pass environment variables to the application container at run time:

$ oc new-app openshift/postgresql-92-centos7 \
    -e POSTGRESQL_USER=user \
    -e POSTGRESQL_DATABASE=db \
    -e POSTGRESQL_PASSWORD=password

The variables can also be read from a file using the --env-file argument. The following is an example file called postgresql.env:

POSTGRESQL_USER=user
POSTGRESQL_DATABASE=db
POSTGRESQL_PASSWORD=password

Read the variables from the file:

$ oc new-app openshift/postgresql-92-centos7 --env-file=postgresql.env

Additionally, environment variables can be given on standard input by using --env-file=-:

$ cat postgresql.env | oc new-app openshift/postgresql-92-centos7 --env-file=-
Note

Any BuildConfig objects created as part of new-app processing are not updated with environment variables passed with the -e|--env or --env-file argument.

3.4.4.2. Specifying build environment variables

When generating applications from a template, source, or an image, you can use the --build-env argument to pass environment variables to the build container at run time:

$ oc new-app openshift/ruby-23-centos7 \
    --build-env HTTP_PROXY=http://myproxy.net:1337/ \
    --build-env GEM_HOME=~/.gem

The variables can also be read from a file using the --build-env-file argument. The following is an example file called ruby.env:

HTTP_PROXY=http://myproxy.net:1337/
GEM_HOME=~/.gem

Read the variables from the file:

$ oc new-app openshift/ruby-23-centos7 --build-env-file=ruby.env

Additionally, environment variables can be given on standard input by using --build-env-file=-:

$ cat ruby.env | oc new-app openshift/ruby-23-centos7 --build-env-file=-

3.4.4.3. Specifying labels

When generating applications from source, images, or templates, you can use the -l|--label argument to add labels to the created objects. Labels make it easy to collectively select, configure, and delete objects associated with the application.

$ oc new-app https://github.com/openshift/ruby-hello-world -l name=hello-world
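
Because every generated object carries the label, you can later select or delete the entire application with a single selector. For example, assuming the label set in the previous command:

$ oc get all -l name=hello-world
$ oc delete all -l name=hello-world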

3.4.4.4. Viewing the output without creation

To see a dry-run of running the new-app command, you can use the -o|--output argument with a yaml or json value. You can then use the output to preview the objects that are created or redirect it to a file that you can edit. After you are satisfied, you can use oc create to create the OpenShift Container Platform objects.

To output new-app artifacts to a file, run the following:

$ oc new-app https://github.com/openshift/ruby-hello-world \
    -o yaml > myapp.yaml

Edit the file:

$ vi myapp.yaml

Create a new application by referencing the file:

$ oc create -f myapp.yaml

3.4.4.5. Creating objects with different names

Objects created by new-app are normally named after the source repository, or the image used to generate them. You can set the name of the objects produced by adding a --name flag to the command:

$ oc new-app https://github.com/openshift/ruby-hello-world --name=myapp

3.4.4.6. Creating objects in a different project

Normally, new-app creates objects in the current project. However, you can create objects in a different project by using the -n|--namespace argument:

$ oc new-app https://github.com/openshift/ruby-hello-world -n myproject

3.4.4.7. Creating multiple objects

The new-app command allows creating multiple applications by specifying multiple parameters to new-app. Labels specified in the command line apply to all objects created by the single command. Environment variables apply to all components created from source or images.

To create an application from a source repository and a Docker Hub image:

$ oc new-app https://github.com/openshift/ruby-hello-world mysql
Note

If a source code repository and a builder image are specified as separate arguments, new-app uses the builder image as the builder for the source code repository. If this is not the intent, specify the required builder image for the source using the ~ separator.
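
For example, a minimal sketch that pins the builder for the source explicitly, so that mysql is deployed as a separate component instead of being used as the builder:

$ oc new-app ruby~https://github.com/openshift/ruby-hello-world mysql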

3.4.4.8. Grouping images and source in a single pod

The new-app command allows deploying multiple images together in a single pod. To specify which images to group together, use the + separator. The --group command line argument can also be used to specify the images that should be grouped together. To group the image built from a source repository with other images, specify its builder image in the group:

$ oc new-app ruby+mysql

To deploy an image built from source and an external image together:

$ oc new-app \
    ruby~https://github.com/openshift/ruby-hello-world \
    mysql \
    --group=ruby+mysql

3.4.4.9. Searching for images, templates, and other inputs

To search for images, templates, and other inputs for the oc new-app command, add the --search and --list flags. For example, to find all of the images or templates that include PHP:

$ oc new-app --search php
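
You can also list the templates and image streams that can be used to create an application; this assumes the standard --list flag of oc new-app:

$ oc new-app --list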

3.4.4.10. Setting the import mode

To set the import mode when using oc new-app, add the --import-mode flag. Set this flag to Legacy or PreserveOriginal to create image streams by using a single sub-manifest or all manifests, respectively.

$ oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest  --import-mode=Legacy --name=test
$ oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest  --import-mode=PreserveOriginal --name=test

3.5. Creating applications using Ruby on Rails

Ruby on Rails is a web framework written in Ruby. This guide covers using Rails 4 on OpenShift Container Platform.

Warning

Go through the entire tutorial to get an overview of all the steps necessary to run your application on OpenShift Container Platform. If you experience a problem, try reading through the whole tutorial and then going back to your issue. It can also be useful to review your previous steps to ensure that all the steps were run correctly.

3.5.1. Prerequisites

  • Basic Ruby and Rails knowledge.
  • Locally installed version of Ruby 2.0.0+, Rubygems, Bundler.
  • Basic Git knowledge.
  • Running instance of OpenShift Container Platform 4.
  • Make sure that an instance of OpenShift Container Platform is running and is available. Also make sure that your oc CLI client is installed and the command is accessible from your command shell, so you can use it to log in using your email address and password.
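
For example, a minimal login sketch, assuming user name and password authentication; the API URL and credentials are placeholders, and the exact flags depend on how your cluster's identity provider is configured:

$ oc login https://api.<cluster_domain>:6443 -u <username> -p <password>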

3.5.2. Setting up the database

Rails applications are almost always used with a database. For local development, use the PostgreSQL database.

Procedure

  1. Install the database:

    $ sudo yum install -y postgresql postgresql-server postgresql-devel
  2. Initialize the database:

    $ sudo postgresql-setup initdb

    This command creates the /var/lib/pgsql/data directory, in which the data is stored.

  3. Start the database:

    $ sudo systemctl start postgresql.service
  4. When the database is running, create your rails user:

    $ sudo -u postgres createuser -s rails

    Note that the user created has no password.

3.5.3. Writing your application

If you are starting your Rails application from scratch, you must install the Rails gem first. Then you can proceed with writing your application.

Procedure

  1. Install the Rails gem:

    $ gem install rails

    Example output

    Successfully installed rails-4.3.0
    1 gem installed

  2. After you install the Rails gem, create a new application with PostgreSQL as your database:

    $ rails new rails-app --database=postgresql
  3. Change into your new application directory:

    $ cd rails-app
  4. If you already have an application, make sure the pg (postgresql) gem is present in your Gemfile. If not, edit your Gemfile by adding the gem:

    gem 'pg'
  5. Generate a new Gemfile.lock with all your dependencies:

    $ bundle install
  6. In addition to using the postgresql database with the pg gem, you must also ensure that the config/database.yml file uses the postgresql adapter.

     Make sure you update the default section in the config/database.yml file so that it looks like this:

    default: &default
      adapter: postgresql
      encoding: unicode
      pool: 5
      host: localhost
      username: rails
      password: <password>
  7. Create your application’s development and test databases:

    $ rake db:create

     This creates the development and test databases in your PostgreSQL server.

3.5.3.1. Creating a welcome page

Since Rails 4 no longer serves a static public/index.html page in production, you must create a new root page.

To create a custom welcome page, you must complete the following steps:

  • Create a controller with an index action.
  • Create a view page for the welcome controller index action.
  • Create a route that serves the application's root page with the created controller and view.

Rails offers a generator that completes all necessary steps for you.

Procedure

  1. Run Rails generator:

    $ rails generate controller welcome index

    All the necessary files are created.

  2. Edit line 2 in the config/routes.rb file as follows:

    root 'welcome#index'
  3. Run the rails server to verify the page is available:

    $ rails server

    You should see your page by visiting http://localhost:3000 in your browser. If you do not see the page, check the logs that are output to your server to debug.

3.5.3.2. Configuring application for OpenShift Container Platform

To have your application communicate with the PostgreSQL database service running in OpenShift Container Platform, you must edit the default section in your config/database.yml file to use environment variables, which you define later when you create the database service.

Procedure

  • Edit the default section in your config/database.yml with pre-defined variables as follows:

    Sample config/database YAML file

    <% user = ENV.key?("POSTGRESQL_ADMIN_PASSWORD") ? "root" : ENV["POSTGRESQL_USER"] %>
    <% password = ENV.key?("POSTGRESQL_ADMIN_PASSWORD") ? ENV["POSTGRESQL_ADMIN_PASSWORD"] : ENV["POSTGRESQL_PASSWORD"] %>
    <% db_service = ENV.fetch("DATABASE_SERVICE_NAME","").upcase %>
    
    default: &default
      adapter: postgresql
      encoding: unicode
      # For details on connection pooling, see rails configuration guide
      # http://guides.rubyonrails.org/configuring.html#database-pooling
      pool: <%= ENV["POSTGRESQL_MAX_CONNECTIONS"] || 5 %>
      username: <%= user %>
      password: <%= password %>
      host: <%= ENV["#{db_service}_SERVICE_HOST"] %>
      port: <%= ENV["#{db_service}_SERVICE_PORT"] %>
      database: <%= ENV["POSTGRESQL_DATABASE"] %>

3.5.3.3. Storing your application in Git

Building an application in OpenShift Container Platform usually requires that the source code be stored in a git repository, so you must install git if you do not already have it.

Prerequisites

  • Install git.

Procedure

  1. Make sure you are in your Rails application directory by running the ls -1 command. The output of the command should look like:

    $ ls -1

    Example output

    app
    bin
    config
    config.ru
    db
    Gemfile
    Gemfile.lock
    lib
    log
    public
    Rakefile
    README.rdoc
    test
    tmp
    vendor

  2. Run the following commands in your Rails app directory to initialize and commit your code to git:

    $ git init
    $ git add .
    $ git commit -m "initial commit"

     After your application is committed, you must push it to a remote repository, for example a new repository that you create in your GitHub account.

  3. Set the remote that points to your git repository:

    $ git remote add origin git@github.com:<namespace/repository-name>.git
  4. Push your application to your remote git repository.

    $ git push

3.5.4. Deploying your application to OpenShift Container Platform

You can deploy your application to OpenShift Container Platform.

After creating the rails-app project, you are automatically switched to the new project namespace.

Deploying your application in OpenShift Container Platform involves three steps:

  • Creating a database service from OpenShift Container Platform’s PostgreSQL image.
  • Creating a frontend service from OpenShift Container Platform’s Ruby 2.0 builder image and your Ruby on Rails source code, which are wired with the database service.
  • Creating a route for your application.

Procedure

  • To deploy your Ruby on Rails application, create a new project for the application:

    $ oc new-project rails-app --description="My Rails application" --display-name="Rails Application"

3.5.4.1. Creating the database service

Your Rails application expects a running database service. For this service, use the PostgreSQL database image.

To create the database service, use the oc new-app command. You must pass this command the necessary environment variables, which are used inside the database container. These environment variables are required to set the username, password, and name of the database. You can change the values of these environment variables to anything you would like. The variables are as follows:

  • POSTGRESQL_DATABASE
  • POSTGRESQL_USER
  • POSTGRESQL_PASSWORD

Setting these variables ensures:

  • A database exists with the specified name.
  • A user exists with the specified name.
  • The user can access the specified database with the specified password.

Procedure

  1. Create the database service:

    $ oc new-app postgresql -e POSTGRESQL_DATABASE=db_name -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password

     To also set the password for the database administrator, append the following to the previous command:

    -e POSTGRESQL_ADMIN_PASSWORD=admin_pw
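
     For example, the complete command with the administrator password included looks like the following; the values are the same placeholders used above:

     $ oc new-app postgresql -e POSTGRESQL_DATABASE=db_name -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_ADMIN_PASSWORD=admin_pw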
  2. Watch the progress:

    $ oc get pods --watch

3.5.4.2. Creating the frontend service

To bring your application to OpenShift Container Platform, you must specify a repository in which your application lives.

Procedure

  1. Create the frontend service and specify the database-related environment variables that were set up when creating the database service:

    $ oc new-app path/to/source/code --name=rails-app -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=db_name -e DATABASE_SERVICE_NAME=postgresql

    With this command, OpenShift Container Platform fetches the source code, sets up the builder, builds your application image, and deploys the newly created image together with the specified environment variables. The application is named rails-app.

  2. Verify the environment variables have been added by viewing the JSON document of the rails-app deployment config:

    $ oc get dc rails-app -o json

    You should see the following section:

    Example output

    "env": [
        {
            "name": "POSTGRESQL_USER",
            "value": "username"
        },
        {
            "name": "POSTGRESQL_PASSWORD",
            "value": "password"
        },
        {
            "name": "POSTGRESQL_DATABASE",
            "value": "db_name"
        },
        {
            "name": "DATABASE_SERVICE_NAME",
            "value": "postgresql"
        }
    ],

  3. Check the build process:

    $ oc logs -f build/rails-app-1
  4. After the build is complete, look at the running pods in OpenShift Container Platform:

    $ oc get pods

     You should see a line starting with rails-app-<number>-<hash>, and that is your application running in OpenShift Container Platform.

  5. Before your application is functional, you must initialize the database by running the database migration script. There are two ways you can do this:

    • Manually from the running frontend container:

      • Exec into the frontend container with the rsh command:

        $ oc rsh <frontend_pod_id>
      • Run the migration from inside the container:

        $ RAILS_ENV=production bundle exec rake db:migrate

        If you are running your Rails application in a development or test environment you do not have to specify the RAILS_ENV environment variable.

    • By adding pre-deployment lifecycle hooks in your template.

3.5.4.3. Creating a route for your application

You can expose a service to create a route for your application.

Procedure

  • To expose a service by giving it an externally reachable hostname like www.example.com, use an OpenShift Container Platform route. In this case, you need to expose the frontend service by typing:

    $ oc expose service rails-app --hostname=www.example.com
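
     To review the route that is created, including the hostname it serves, you can run the following minimal check:

     $ oc get route rails-app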
Warning

Ensure the hostname you specify resolves to the IP address of the router.
