
Chapter 4. Cartridges vs Images


4.1. Overview

Cartridges in OpenShift v2 were the focal point for building applications. Each cartridge provided the required libraries, source code, build mechanisms, connection logic, and routing logic along with a preconfigured environment to run the components of your applications.

However, with cartridges there was no clear distinction between the developer content and the cartridge content. For example, one limitation was that you did not have ownership of the home directory on each gear of your application. Another disadvantage was that cartridges were not the best distribution mechanism for large binaries. While you could use external dependencies from within cartridges, doing so would lose the benefits of encapsulation.

4.2. Dependencies

Cartridge dependencies were defined with Configure-Order or Requires in the cartridge manifest. OpenShift v3 uses a declarative model in which pods bring themselves in line with a predefined state. Explicit dependencies are applied at runtime rather than through install-time ordering.

For example, you might require another service to be available before you start. Such a dependency check is always applicable, not just when you create the two components. Thus, expressing the dependency as a runtime check enables the system to stay healthy over time.
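
For illustration, one way to express such a runtime check is an init container in the pod definition that waits for the dependent service before the main container starts. This is only a minimal sketch; the db service, the busybox wait loop, and the image names are placeholder assumptions rather than anything mandated by OpenShift:

apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  # The dependency is checked every time the pod starts, not only when the
  # components were first created.
  initContainers:
  - name: wait-for-db
    image: busybox
    command: ['sh', '-c', 'until nslookup db; do echo waiting for db; sleep 2; done']
  containers:
  - name: app
    image: myregistry/myapp:latest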

4.3. Collocation

Whereas cartridges in OpenShift v2 could be collocated within gears, images in OpenShift v3 are mapped 1:1 with containers. Containers use pods as their collocation mechanism.
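
As a brief illustration, the pod below collocates two containers that are scheduled together on the same node and share a network namespace; the image names are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
  # Each image maps 1:1 to a container; the pod is the unit of collocation.
  - name: web
    image: myregistry/web:latest
    ports:
    - containerPort: 8080
  - name: log-forwarder
    image: myregistry/log-forwarder:latest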

4.4. Source Code

In OpenShift v2, applications were required to have at least one web framework with one git repo. With OpenShift v3, you can choose which images are built from source and that source can be located outside of OpenShift itself. Because the source is disconnected from the images, the choice of image and source are distinct operations with source being optional.
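
As a sketch, a build configuration can pull source from a repository hosted outside OpenShift and feed it to a builder image of your choice; the repository URL, builder image, and application name below are hypothetical:

apiVersion: v1
kind: BuildConfig
metadata:
  name: myapp
spec:
  # The source lives outside OpenShift; choosing the image and choosing the
  # source are separate decisions, and the source is optional.
  source:
    git:
      uri: https://example.com/myorg/myapp.git
      ref: master
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: ruby:latest
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest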

4.5. Build

In OpenShift v2, builds happened in place as part of your application gears, which meant downtime for non-scaled applications due to resource constraints. In v3, builds happen in separate containers.

Another change for builds is how build results are transmitted between containers. In v2, build results were rsynced between gears. In v3, build results are first committed as an immutable image and published to an internal registry. That image is then available to launch on any of the nodes in the cluster, or to roll back to at a future date.

All of this means that builds are no longer part of the cartridges and are provided as distinct choices.
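
As an illustrative sketch, a deployment configuration can watch for the newly published image and roll it out automatically, while earlier images remain available for rollback; the names below are hypothetical:

apiVersion: v1
kind: DeploymentConfig
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    app: myapp
  triggers:
  # Redeploy whenever a new immutable image is pushed to the internal
  # registry under the myapp:latest tag.
  - type: ImageChange
    imageChangeParams:
      automatic: true
      containerNames:
      - myapp
      from:
        kind: ImageStreamTag
        name: myapp:latest
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: ' '   # filled in by the ImageChange trigger
        ports:
        - containerPort: 8080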

4.6. Routing

Routing is another area that has undergone significant formalization in OpenShift v3. In v2, you had to decide up front whether your application was scalable, and separately whether its routing layer was enabled for high availability. In OpenShift v3, routes are first-class objects that are HA-capable simply by scaling your application component up to two or more replicas. There is never a need to recreate your application or change its DNS entry.

Routes have also been moved up a level and are disconnected from the images themselves. Previously, cartridges defined a default set of routes and you could add additional aliases to your applications. With v3, you can use templates to set up 0 to N routes for any image. These routes let you modify the scheme, host, and paths exposed as desired, and there is no distinction between system routes and user aliases.
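
For example, a route is defined independently of any image and can expose whatever scheme, host, and path you need; the host name and backing service below are placeholders:

apiVersion: v1
kind: Route
metadata:
  name: frontend
spec:
  host: www.example.com
  path: /app
  # TLS termination at the router gives the route an https scheme.
  tls:
    termination: edge
  to:
    kind: Service
    name: frontend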
