Chapter 10. Applications


10.1. Introduction to Applications

When a new application is created, a URL consisting of the application name and the domain name is registered in DNS. A copy of the application code is checked out locally into a directory with the same name as the application. Note that different types of applications may require different directory structures. Application components run on gears.
With each new application created with the client tools, a remote Git repository is populated with the selected cartridge and then cloned to the current directory on the local machine. The host name and IP address of the application are also added to the list of known hosts in the ~/.ssh/known_hosts file.
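As a sketch, assuming the rhc client tools and placeholder names (myapp, mydomain, example.com), application creation and the resulting URL look like this; the rhc command is shown as a comment because it requires a reachable broker:

```shell
# Hypothetical example; the application, domain, and cloud domain names
# below are placeholders. Creating the app registers its URL in DNS and
# clones the Git repository into a directory named after the application.
APP=myapp
DOMAIN=mydomain
CLOUD_DOMAIN=example.com
# rhc app create "$APP" php-5.4    # contacts the broker; requires client tools
echo "${APP}-${DOMAIN}.${CLOUD_DOMAIN}"
```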
The following table describes each component that makes up an OpenShift Enterprise application.
Table 10.1. Application Components
Component Description
Domain The domain provides a unique group identifier for all the applications of a specific user. The domain is not directly related to DNS; instead, it is appended to the application name to form the final application URL.
Application Name The name of the application is selected by the user and, combined with the domain, forms the final URL used to access the application.
Alias DNS names can be provided for the application by registering an alias with OpenShift Enterprise and pointing the DNS entry to the OpenShift Enterprise servers.
Git repository A Git repository is used to modify application code locally. After changes are committed, the git push command deploys the revised code.
OpenShift Enterprise provides dedicated /var/tmp and /tmp directories for each user application. The /var/tmp directory is a symbolic link to /tmp. Each /tmp directory is completely isolated from the /tmp directories of all other applications. Files that remain untouched for ten days are automatically deleted from these directories.
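The Git workflow from Table 10.1 can be sketched locally. This example uses a bare repository created with mktemp as a stand-in for the application's remote repository, so no broker is needed; on OpenShift Enterprise, the final git push would trigger a redeploy:

```shell
# Local sketch of the edit/commit/push cycle; the bare repository below
# stands in for the application's remote Git repository on the gear.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/remote.git"       # stand-in for the remote repo
git clone -q "$tmp/remote.git" "$tmp/myapp"
cd "$tmp/myapp"
echo '<?php echo "Hello"; ?>' > index.php  # modify application code locally
git add index.php
git -c user.email=dev@example.com -c user.name=dev commit -qm "Update code"
git push -q origin HEAD                    # on OpenShift, deploys the revised code
```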

10.1.1. Application Life Cycle

The following table describes the general life cycle of most OpenShift Enterprise applications.
Table 10.2. Application Life Cycle
Process Description
Code Develop the application code with the desired language and tools, and continuously push it to the application's remote Git source code repository.
Build OpenShift Enterprise supports various build mechanisms, whether it is a simple script, a personal Jenkins continuous integration server, or an external build system.
Deploy Every application is composed of cartridges that simplify server maintenance and configuration. OpenShift Enterprise supports various technologies to provision the required services automatically.
Manage OpenShift Enterprise allows real-time monitoring, debugging, and tuning of applications. Applications are scaled automatically depending on web traffic.

10.1.2. Scalable Applications

Applications can be created as either scalable or not scalable. Scalable applications include a load-balancing proxy (HAProxy) cartridge, which is located on the same gear as the web framework cartridge. The load-balancing proxy provides horizontal scaling by cloning the web framework cartridge and application data onto multiple gears.
Scalable applications are set to scale automatically by default. However, you can scale an application manually. See Section 11.6, “Scaling an Application Manually” for more information.
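A minimal sketch of creating a scalable application and setting manual scaling bounds, assuming the rhc client tools; the cartridge name and option syntax are examples, and the commands are shown as comments because they require a reachable broker:

```shell
# Hypothetical rhc invocations (require client tools and a broker):
# rhc app create myapp php-5.4 -s                       # -s creates a scalable app
# rhc cartridge scale php-5.4 -a myapp --min 1 --max 3  # manual scaling bounds
MIN=1
MAX=3
echo "scale between ${MIN} and ${MAX} gears"
```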
How Auto-scaling Works

Each application created on OpenShift Enterprise must have one web framework cartridge defined upon creation, for example a PHP cartridge. When an application is defined as scalable, a second cartridge, the HAProxy cartridge, is added to the application. The HAProxy cartridge listens to all incoming web-page requests to an application and passes them to the web cartridge, following the defined guidelines for load monitoring.

When the number of web-page requests to an application increases, the HAProxy cartridge informs OpenShift Enterprise that an overload of requests has been detected. A copy of the existing web cartridge and the application data is then cloned to a separate gear; the application now has two copies of the web cartridge. This process repeats as the HAProxy cartridge detects more web-page requests, and each time a copy of the web cartridge is created on a separate gear, the application's scale factor increases by one.
When the application reaches three copies of the web framework cartridge (the default), the HAProxy load balancer disables routing to the framework cartridge located on its own gear. This gives the HAProxy cartridge full use of that gear's resources while it continues to route requests to the framework cartridges on the separate gears. Routing to the framework cartridge that shares a gear with the HAProxy cartridge is enabled again once the application is scaled down.
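The scale-up rule above can be modeled as a simple loop. The per-gear session capacity and the observed session count here are illustrative values, not the HAProxy cartridge's actual thresholds:

```shell
# Toy model of auto-scaling: add a gear whenever observed sessions
# exceed the combined assumed capacity of the current gears.
sessions=40        # concurrent sessions observed by the load balancer
gears=2            # gears currently running the web framework cartridge
capacity=16        # assumed sessions one gear can serve
while [ "$sessions" -gt $((gears * capacity)) ]; do
  gears=$((gears + 1))   # clone the web cartridge onto a new gear (+1 scale factor)
done
echo "$gears"
```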

Figure 10.1. Cartridges on Gears in a Scaling Application

10.1.3. Highly-Available Applications

Scalable applications duplicate application code onto multiple gears within your application. Any requests to your application go through the node router to the head (HAProxy) gear, which distributes requests to the available framework gears.
Alternatively, when a scalable application is configured to be highly available, at least one load-balancer gear is guaranteed to survive an outage, so the application remains accessible. High availability provides at least two load-balancing (HAProxy) gears on separate node hosts. Requests instead go through an external routing layer, bypassing the node router, and are sent directly to the HAProxy gears, which distribute them to the framework gears. During an outage, the routing layer ensures incoming traffic reaches an available HAProxy gear, and the HAProxy gears route traffic only to the framework gears that are available.
While you, as an application developer, can enable your application to be highly available, the OpenShift Enterprise infrastructure and configuration must be set to support highly-available applications. Contact your system administrator, or see the OpenShift Enterprise Administration Guide for more information on highly-available applications and the infrastructure needed to support them.
Enabling High Availability in Applications

At the time of writing, the only way to enable high availability in applications is through the REST API. See Section 11.7, “Making Applications Highly Available” for the curl command to make your applications highly available, or see the relevant section in the OpenShift Enterprise REST API Guide for more information.
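A hedged sketch of such a REST call follows. The broker host, credentials, endpoint path, and the event name are all assumptions based on broker REST conventions; verify the exact form against the OpenShift Enterprise REST API Guide before use:

```shell
# Hypothetical REST call to make an application highly available.
# The curl line is commented out because it requires a live broker.
BROKER=broker.example.com
URL="https://${BROKER}/broker/rest/domains/mydomain/applications/myapp/events"
# curl -k -X POST -u user:password --data-urlencode event=make-ha "$URL"
echo "$URL"
```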
