Images
Creating and managing images and imagestreams in OpenShift Container Platform
Abstract
Chapter 1. Overview of images
1.1. Understanding containers, images, and image streams
Containers, images, and image streams are important concepts to understand when you set out to create and manage containerized software. An image holds a set of software that is ready to run, while a container is a running instance of a container image. An image stream provides a way of storing different versions of the same basic image. Those different versions are represented by different tags on the same image name.
1.2. Images
Containers in OpenShift Container Platform are based on OCI- or Docker-formatted container images. An image is a binary that includes all of the requirements for running a single container, as well as metadata describing its needs and capabilities.
You can think of it as a packaging technology. Containers only have access to resources defined in the image unless you give the container additional access when creating it. By deploying the same image in multiple containers across multiple hosts and load balancing between them, OpenShift Container Platform can provide redundancy and horizontal scaling for a service packaged into an image.
You can use the podman or docker CLI to build and manage container images.
Because applications develop over time, a single image name can actually refer to many different versions of the same image. Each different image is referred to uniquely by its hash, a long hexadecimal number such as fd44297e2ddb050ec4f…, which is usually shortened to 12 characters, such as fd44297e2ddb.
1.3. Image registry
An image registry is a content server that can store and serve container images. For example: registry.redhat.io.
A registry contains a collection of one or more image repositories, which contain one or more tagged images. Red Hat provides a registry at registry.redhat.io.
1.4. Image repository
An image repository is a collection of related container images and tags identifying them. For example, the OpenShift Container Platform Jenkins images are in the repository:
docker.io/openshift/jenkins-2-centos7
1.5. Image tags
An image tag is a label applied to a container image in a repository that distinguishes a specific image from other images in an image stream. Typically, the tag represents a version number of some sort. For example, here :v3.11.59-2 is the tag:
registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2
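As an illustration of the reference format, the components of a fully qualified image reference can be separated with plain shell string operations. This is a sketch only, not any oc or registry tooling, and it assumes a reference that includes a tag and no registry port:

```shell
# Hypothetical helper: split an image reference of the form
# <registry>/<repository>:<tag> into its components using shell
# parameter expansion. Assumes the reference has a tag and that
# the registry host does not include a ":<port>" suffix.
ref='registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2'

registry=${ref%%/*}      # text before the first "/"
rest=${ref#*/}           # everything after the first "/"
repository=${rest%%:*}   # text before the ":" that starts the tag
tag=${ref##*:}           # text after the last ":"

echo "registry:   $registry"
echo "repository: $repository"
echo "tag:        $tag"
```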
You can add additional tags to an image. For example, an image might be assigned the tags :v3.11.59-2 and :latest.
OpenShift Container Platform provides the oc tag command, which is similar to the docker tag command, but operates on image streams instead of directly on images.
1.6. Image IDs
An image ID is a SHA (Secure Hash Algorithm) code that can be used to pull an image. A SHA image ID cannot change. A specific SHA identifier always references the exact same container image content. For example:
docker.io/openshift/jenkins-2-centos7@sha256:ab312bda324
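Unlike a tagged reference, a digest-pinned reference joins the repository and the SHA with an @ character. As a small sketch, using the truncated example digest from this section, the two parts can be separated in shell:

```shell
# Sketch: a digest-pinned reference uses "@" rather than ":<tag>".
# The digest below is the (truncated) example from this section.
ref='docker.io/openshift/jenkins-2-centos7@sha256:ab312bda324'

repo=${ref%%@*}    # repository part, before the "@"
digest=${ref#*@}   # digest part, after the "@"

echo "$repo"
echo "$digest"
```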
1.7. Containers
The basic units of OpenShift Container Platform applications are called containers. Linux container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources. The word container is defined as a specific running or paused instance of a container image.
Many application instances can be running in containers on a single host without visibility into each other's processes, files, network, and so on. Typically, each container provides a single service, often called a micro-service, such as a web server or a database, though containers can be used for arbitrary workloads.
The Linux kernel has been incorporating capabilities for container technologies for years. The Docker project developed a convenient management interface for Linux containers on a host. More recently, the Open Container Initiative has developed open standards for container formats and container runtimes. OpenShift Container Platform and Kubernetes add the ability to orchestrate OCI- and Docker-formatted containers across multi-host installations.
Though you do not directly interact with container runtimes when using OpenShift Container Platform, understanding their capabilities and terminology is important for understanding their role in OpenShift Container Platform and how your applications function inside of containers.
Tools such as podman can be used to replace docker command-line tools.
1.8. Why use imagestreams
An image stream and its associated tags provide an abstraction for referencing container images from within OpenShift Container Platform. The image stream and its tags allow you to see what images are available and ensure that you are using the specific image you need even if the image in the repository changes.
Image streams do not contain actual image data, but present a single virtual view of related images, similar to an image repository.
You can configure builds and deployments to watch an image stream for notifications when new images are added and react by performing a build or deployment, respectively.
For example, if a deployment is using a certain image and a new version of that image is created, a deployment could be automatically performed to pick up the new version of the image.
However, if the image stream tag used by the deployment or build is not updated, then even if the container image in the container image registry is updated, the build or deployment continues using the previous, presumably known good image.
The source images can be stored in any of the following:
- OpenShift Container Platform’s integrated registry.
- An external registry, for example registry.redhat.io or Quay.io.
- Other image streams in the OpenShift Container Platform cluster.
When you define an object that references an image stream tag, such as a build or deployment configuration, you point to an image stream tag and not the repository. When you build or deploy your application, OpenShift Container Platform queries the repository using the image stream tag to locate the associated ID of the image and uses that exact image.
The image stream metadata is stored in the etcd instance along with other cluster information.
Using image streams has several significant benefits:
- You can tag, roll back a tag, and quickly deal with images, without having to re-push using the command line.
- You can trigger builds and deployments when a new image is pushed to the registry. Also, OpenShift Container Platform has generic triggers for other resources, such as Kubernetes objects.
- You can mark a tag for periodic re-import. If the source image has changed, that change is picked up and reflected in the image stream, which triggers the build or deployment flow, depending upon the build or deployment configuration.
- You can share images using fine-grained access control and quickly distribute images across your teams.
- If the source image changes, the image stream tag still points to a known-good version of the image, ensuring that your application does not break unexpectedly.
- You can configure security around who can view and use the images through permissions on the image stream objects.
- Users that lack permission to read or list images on the cluster level can still retrieve the images tagged in a project using image streams.
You can manage image streams, use image streams with Kubernetes resources, and trigger updates on image stream updates.
1.9. Image stream tags
An image stream tag is a named pointer to an image in an image stream. An image stream tag is similar to a container image tag.
1.10. Image stream images
An image stream image allows you to retrieve a specific container image from a particular image stream where it is tagged. An image stream image is an API resource object that pulls together some metadata about a particular image SHA identifier.
1.11. Image stream triggers
An image stream trigger causes a specific action when an image stream tag changes. For example, importing can cause the value of the tag to change, which causes a trigger to fire when there are deployments, builds, or other resources listening for those.
1.12. How you can use the Cluster Samples Operator
During the initial startup, the Operator creates the default samples resource to initiate the creation of the image streams and templates. You can use the Cluster Samples Operator to manage the sample image streams and templates stored in the openshift namespace.
As a cluster administrator, you can use the Cluster Samples Operator to manage these sample image streams and templates.
1.13. About templates
A template is a definition of an object to be replicated. You can use templates to build and deploy configurations.
1.14. How you can use Ruby on Rails
As a developer, you can use Ruby on Rails to:
Write your application:
- Set up a database.
- Create a welcome page.
- Configure your application for OpenShift Container Platform.
- Store your application in Git.
Deploy your application in OpenShift Container Platform:
- Create the database service.
- Create the frontend service.
- Create a route for your application.
Chapter 2. Configuring the Cluster Samples Operator
The Cluster Samples Operator, which operates in the openshift namespace, installs and updates the sample image streams and templates.
2.1. Understanding the Cluster Samples Operator
During installation, the Operator creates the default configuration object for itself and then creates the sample image streams and templates, including quick start templates.
To facilitate image stream imports from other registries that require credentials, a cluster administrator can create any additional secrets that contain the content of a Docker config.json file in the openshift namespace needed to facilitate image import.
The Cluster Samples Operator configuration is a cluster-wide resource, and the deployment is contained within the openshift-cluster-samples-operator namespace.
The image for the Cluster Samples Operator contains image stream and template definitions for the associated OpenShift Container Platform release. When each sample is created or updated, the Cluster Samples Operator includes an annotation that denotes the version of OpenShift Container Platform. The Operator uses this annotation to ensure that each sample matches the release version. Samples outside of its inventory are ignored, as are skipped samples. Modifications to any samples that are managed by the Operator, where that version annotation is modified or deleted, are reverted automatically.
The Jenkins images are part of the image payload from installation and are tagged into the image streams directly.
The Cluster Samples Operator configuration resource includes a finalizer which cleans up the following upon deletion:
- Operator managed image streams.
- Operator managed templates.
- Operator generated configuration resources.
- Cluster status resources.
Upon deletion of the samples resource, the Cluster Samples Operator recreates the resource using the default configuration.
2.1.1. Cluster Samples Operator’s use of management state
The Cluster Samples Operator is bootstrapped as Managed by default. In the Managed state, the Operator actively manages its resources and keeps the sample image streams and templates installed and up to date.
Certain circumstances result in the Cluster Samples Operator bootstrapping itself as Removed:
- If the Cluster Samples Operator cannot reach registry.redhat.io after three minutes on initial startup after a clean installation.
- If the Cluster Samples Operator detects it is on an IPv6 network.
However, if the Cluster Samples Operator detects that it is on an IPv6 network and an OpenShift Container Platform global proxy is configured, then the IPv6 check supersedes all the checks. As a result, the Cluster Samples Operator bootstraps itself as Removed.
IPv6 installations are not currently supported by registry.redhat.io. The Cluster Samples Operator pulls most of the sample image streams and images from registry.redhat.io.
2.1.1.1. Restricted network installation
Bootstrapping as Removed when registry.redhat.io is unreachable enables restricted network installations when the default image registry cannot be accessed. After a mirror registry is configured and the samples are mirrored, a cluster administrator can change the management state from Removed to Managed to install the samples.
2.1.1.2. Restricted network installation with initial network access
Conversely, if a cluster that is intended to be a restricted network or disconnected cluster is first installed while network access exists, the Cluster Samples Operator installs the content from registry.redhat.io because it can reach it. If you want the Cluster Samples Operator to bootstrap as Removed instead, you must put the following additional YAML file in the openshift directory that is created by openshift-install create manifests:
Example Cluster Samples Operator YAML file with managementState: Removed
apiVersion: samples.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  architectures:
  - x86_64
  managementState: Removed
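A manifest like this can be written from the command line. The following is a minimal sketch; the file name is a placeholder, and the openshift/ directory is assumed to be the one created by openshift-install create manifests:

```shell
# Sketch: write the managementState: Removed configuration into the
# openshift/ directory produced by `openshift-install create manifests`.
# The file name below is a placeholder chosen for illustration.
mkdir -p openshift
cat > openshift/cluster-samples-operator-config.yaml <<'EOF'
apiVersion: samples.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  architectures:
  - x86_64
  managementState: Removed
EOF
```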
2.1.2. Cluster Samples Operator’s tracking and error recovery of image stream imports
After creation or update of a samples image stream, the Cluster Samples Operator monitors the progress of each image stream tag’s image import.
If an import fails, the Cluster Samples Operator retries the import through the image stream image import API, which is the same API used by the oc import-image command, until the import succeeds, or until the Cluster Samples Operator's configuration is changed so that either the image stream is added to the skippedImagestreams list, or the management state is changed to Removed.
Additional resources
- If the Cluster Samples Operator is removed during installation, you can use the Cluster Samples Operator with an alternate registry so content can be imported, and then set the Cluster Samples Operator to Managed to get the samples.
- To ensure the Cluster Samples Operator bootstraps as Removed in a restricted network installation with initial network access, to defer samples installation until you have decided which samples are desired, follow the instructions for customizing nodes to override the Cluster Samples Operator default configuration and initially come up as Removed.
- To host samples in your disconnected environment, follow the instructions for using the Cluster Samples Operator with an alternate registry.
2.1.3. Cluster Samples Operator assistance for mirroring
During installation, OpenShift Container Platform creates a config map named imagestreamtag-to-image in the openshift-cluster-samples-operator namespace. The imagestreamtag-to-image config map contains an entry, the populating image, for each image stream tag.
The format of the key for each entry in the data field in the config map is <image_stream_name>_<image_stream_tag_name>.
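Given that key format, a key can be split back into its image stream name and tag on the first underscore. This is an illustrative sketch only; the key jenkins_latest is a hypothetical example, and the split assumes the image stream name itself contains no underscore:

```shell
# Sketch: split a config map key of the form
# <image_stream_name>_<image_stream_tag_name> on the first underscore.
# "jenkins_latest" is a hypothetical key used for illustration; the
# split assumes the stream name contains no "_" of its own.
key='jenkins_latest'

stream=${key%%_*}   # part before the first "_"
tag=${key#*_}       # part after the first "_"

echo "stream: $stream"
echo "tag:    $tag"
```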
During a disconnected installation of OpenShift Container Platform, the status of the Cluster Samples Operator is set to Removed. If you choose to change it to Managed, it installs samples.
The use of samples in a network-restricted or disconnected environment may require access to services external to your network. Some example services include: GitHub, Maven Central, npm, RubyGems, PyPI and others. There might be additional steps to take that allow the Cluster Samples Operator's objects to reach the services they require.
You can use this config map as a reference for which images need to be mirrored for your image streams to import.
- While the Cluster Samples Operator is set to Removed, you can create your mirrored registry, or determine which existing mirrored registry you want to use.
- Mirror the samples you want to the mirrored registry using the new config map as your guide.
- Add any of the image streams you did not mirror to the skippedImagestreams list of the Cluster Samples Operator configuration object.
- Set samplesRegistry of the Cluster Samples Operator configuration object to the mirrored registry.
- Then set the Cluster Samples Operator to Managed to install the image streams you have mirrored.
See Using Cluster Samples Operator image streams with alternate or mirrored registries for a detailed procedure.
2.2. Cluster Samples Operator configuration parameters
The samples resource offers the following configuration fields:
| Parameter | Description |
|---|---|
| managementState | Managed, Removed, or Unmanaged. Note: Creation or update of RHEL content does not commence if the secret for pull access is not in place when either Managed state is set or samplesRegistry points to registry.redhat.io. Creation or update of RHEL content is not gated by the existence of the pull secret if the samplesRegistry is overridden to another registry. |
| samplesRegistry | Allows you to specify which registry is accessed by image streams for their image content. |
| architectures | Placeholder to choose an architecture type. |
| skippedImagestreams | Image streams that are in the Cluster Samples Operator’s inventory but that the cluster administrator wants the Operator to ignore or not manage. You can add a list of image stream names to this parameter. |
| skippedTemplates | Templates that are in the Cluster Samples Operator’s inventory, but that the cluster administrator wants the Operator to ignore or not manage. |
Secret, image stream, and template watch events can come in before the initial samples resource object is created. If they do, the Cluster Samples Operator detects and re-queues the event.
2.2.1. Configuration restrictions
When the Cluster Samples Operator starts supporting multiple architectures, the architecture list is not allowed to be changed while in the Managed state.
To change the architectures values, a cluster administrator must:
- Mark the Management State as Removed, saving the change.
- In a subsequent change, edit the architecture and change the Management State back to Managed.
The Cluster Samples Operator still processes secrets while in the Removed state. You can create the secret before switching to the Removed state, while in the Removed state before switching to the Managed state, or after switching to the Managed state. There are delays in creating the samples until the secret event is processed if you create the secret after switching to the Managed state.
2.2.2. Conditions
The samples resource maintains the following conditions in its status:
| Condition | Description |
|---|---|
| SamplesExist | Indicates the samples are created in the openshift namespace. |
| ImageChangesInProgress | This condition is deprecated in OpenShift Container Platform. |
| ConfigurationValid | Indicates whether the proposed samples resource configuration is valid. |
| RemovePending | Indicator that there is a Management State: Removed setting pending, but the Cluster Samples Operator is waiting for the deletions to complete. |
| ImportImageErrorsExist | Indicator of which image streams had errors during the image import phase for one of their tags. |
| MigrationInProgress | This condition is deprecated in OpenShift Container Platform. |
2.3. Accessing the Cluster Samples Operator configuration
You can configure the Cluster Samples Operator by editing its configuration object with the provided parameters.
Prerequisites
- Install the OpenShift CLI (oc).
Procedure
Access the Cluster Samples Operator configuration:

$ oc edit configs.samples.operator.openshift.io/cluster -o yaml

The Cluster Samples Operator configuration resembles the following example:

apiVersion: samples.operator.openshift.io/v1
kind: Config
...
2.4. Removing deprecated image stream tags from the Cluster Samples Operator
The Cluster Samples Operator leaves deprecated image stream tags in an image stream because users can have deployments that use the deprecated image stream tags.
You can remove deprecated image stream tags by editing the image stream with the oc tag command.
Deprecated image stream tags that the samples providers have removed from their image streams are not included on initial installations.
Prerequisites
- You installed the oc CLI.
Procedure
Remove deprecated image stream tags by editing the image stream with the oc tag command:

$ oc tag -d <image_stream_name:tag>

Example output

Deleted tag default/<image_stream_name:tag>.
Additional resources
- For more information about configuring credentials, see Using image pull secrets.
Chapter 3. Using the Cluster Samples Operator with an alternate registry
You can use the Cluster Samples Operator with an alternate registry by first creating a mirror registry.
You must have access to the internet to obtain the necessary container images. In this procedure, you place the mirror registry on a mirror host that has access to both your network and the internet.
3.1. About the mirror registry
You can mirror the images that are required for OpenShift Container Platform installation and subsequent product updates to a container mirror registry such as Red Hat Quay, JFrog Artifactory, Sonatype Nexus Repository, or Harbor. If you do not have access to a large-scale container registry, you can use the mirror registry for Red Hat OpenShift, a small-scale container registry included with OpenShift Container Platform subscriptions.
You can use any container registry that supports Docker v2-2, such as Red Hat Quay, the mirror registry for Red Hat OpenShift, Artifactory, Sonatype Nexus Repository, or Harbor. Regardless of your chosen registry, the procedure to mirror content from Red Hat hosted sites on the internet to an isolated image registry is the same. After you mirror the content, you configure each cluster to retrieve this content from your mirror registry.
The internal registry of the OpenShift Container Platform cluster cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process.
If choosing a container registry that is not the mirror registry for Red Hat OpenShift, it must be reachable by every machine in the clusters that you provision. If the registry is unreachable, installation, updating, or normal operations such as workload relocation might fail. For that reason, you must run mirror registries in a highly available way, and the mirror registries must at least match the production availability of your OpenShift Container Platform clusters.
When you populate your mirror registry with OpenShift Container Platform images, you can follow two scenarios. If you have a host that can access both the internet and your mirror registry, but not your cluster nodes, you can directly mirror the content from that machine. This process is referred to as connected mirroring. If you have no such host, you must mirror the images to a file system and then bring that host or removable media into your restricted environment. This process is referred to as disconnected mirroring.
For mirrored registries, to view the source of pulled images, you must review the Trying to access log entry in the CRI-O logs. Other methods to view the image pull source, such as using the crictl images command on a node, show the non-mirrored image name, even though the image is pulled from the mirrored location.
Red Hat does not test third party registries with OpenShift Container Platform.
Additional information
For information on viewing the CRI-O logs to view the image source, see Viewing the image pull source.
3.1.1. Preparing the mirror host
Before you create the mirror registry, you must prepare the mirror host.
3.1.2. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in this version of OpenShift Container Platform. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Linux Client entry and save the file.
- Unpack the archive:

  $ tar xvzf <file>

- Place the oc binary in a directory that is on your PATH.

  To check your PATH, execute the following command:

  $ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
- Move the oc binary to a directory that is on your PATH.

  To check your PATH, open the command prompt and execute the following command:

  C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
- Move the oc binary to a directory on your PATH.

  To check your PATH, open a terminal and execute the following command:

  $ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
3.2. Configuring credentials that allow images to be mirrored
Create a container image registry credentials file that allows mirroring images from Red Hat to your mirror.
Prerequisites
- You configured a mirror registry to use in your disconnected environment.
Procedure
Complete the following steps on the installation host:
- Download your registry.redhat.io pull secret from the Red Hat OpenShift Cluster Manager and save it to a .json file.
- Generate the base64-encoded user name and password or token for your mirror registry:
  $ echo -n '<user_name>:<password>' | base64 -w0

  Example output

  BGVtbYk3ZHAtqXs=

  For <user_name> and <password>, specify the user name and password that you configured for your registry.
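The encoding step can be verified locally before touching any registry. The following sketch uses placeholder credentials, not real ones, and round-trips them through base64 the same way the command above does:

```shell
# Illustration with placeholder credentials only: encode the
# "user:password" pair as the procedure does, then decode it back
# to confirm the round trip.
encoded=$(echo -n 'myuser:mypassword' | base64 -w0)
decoded=$(echo "$encoded" | base64 -d)

echo "$encoded"
echo "$decoded"
```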
- Make a copy of your pull secret in JSON format:

  $ cat ./pull-secret.text | jq . > <path>/<pull_secret_file_in_json>

  Specify the path to the folder to store the pull secret in and a name for the JSON file that you create.
- Save the file either as ~/.docker/config.json or $XDG_RUNTIME_DIR/containers/auth.json.
The contents of the file resemble the following example:

{
  "auths": {
    "cloud.openshift.com": {
      "auth": "b3BlbnNo...",
      "email": "you@example.com"
    },
    "quay.io": {
      "auth": "b3BlbnNo...",
      "email": "you@example.com"
    },
    "registry.connect.redhat.com": {
      "auth": "NTE3Njg5Nj...",
      "email": "you@example.com"
    },
    "registry.redhat.io": {
      "auth": "NTE3Njg5Nj...",
      "email": "you@example.com"
    }
  }
}

Edit the new file and add a section that describes your registry to it:
"auths": {
  "<mirror_registry>": {
    "auth": "<credentials>",
    "email": "you@example.com"
  }
},

For <mirror_registry>, specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For <credentials>, specify the base64-encoded user name and password for the mirror registry.

The file resembles the following example:
{
  "auths": {
    "registry.example.com": {
      "auth": "BGVtbYk3ZHAtqXs=",
      "email": "you@example.com"
    },
    "cloud.openshift.com": {
      "auth": "b3BlbnNo...",
      "email": "you@example.com"
    },
    "quay.io": {
      "auth": "b3BlbnNo...",
      "email": "you@example.com"
    },
    "registry.connect.redhat.com": {
      "auth": "NTE3Njg5Nj...",
      "email": "you@example.com"
    },
    "registry.redhat.io": {
      "auth": "NTE3Njg5Nj...",
      "email": "you@example.com"
    }
  }
}
3.3. Mirroring the OpenShift Container Platform image repository
Mirror the OpenShift Container Platform image repository to your registry to use during cluster installation or upgrade.
Prerequisites
- Your mirror host has access to the internet.
- You configured a mirror registry to use in your restricted network and can access the certificate and credentials that you configured.
- You downloaded the pull secret from the Red Hat OpenShift Cluster Manager and modified it to include authentication to your mirror repository.
If you use self-signed certificates that do not set a Subject Alternative Name, you must precede the oc commands in this procedure with GODEBUG=x509ignoreCN=0. If you do not set this variable, the oc commands will fail with the following error:

x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0
Procedure
Complete the following steps on the mirror host:
- Review the OpenShift Container Platform downloads page to determine the version of OpenShift Container Platform that you want to install and determine the corresponding tag on the Repository Tags page.
Set the required environment variables:
- Export the release version:

  $ OCP_RELEASE=<release_version>

  For <release_version>, specify the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.5.4.

- Export the local registry name and host port:

  $ LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'

  For <local_registry_host_name>, specify the registry domain name for your mirror repository, and for <local_registry_host_port>, specify the port that it serves content on.

- Export the local repository name:

  $ LOCAL_REPOSITORY='<local_repository_name>'

  For <local_repository_name>, specify the name of the repository to create in your registry, such as ocp4/openshift4.

- Export the name of the repository to mirror:

  $ PRODUCT_REPO='openshift-release-dev'

  For a production release, you must specify openshift-release-dev.

- Export the path to your registry pull secret:

  $ LOCAL_SECRET_JSON='<path_to_pull_secret>'

  For <path_to_pull_secret>, specify the absolute path to and file name of the pull secret for your mirror registry that you created.

- Export the release mirror:

  $ RELEASE_NAME="ocp-release"

  For a production release, you must specify ocp-release.

- Export the type of architecture for your server, such as x86_64:

  $ ARCHITECTURE=<server_architecture>

- Export the path to the directory to host the mirrored images:

  $ REMOVABLE_MEDIA_PATH=<path>

  Specify the full path, including the initial forward slash (/) character.
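With these variables set, the source and destination references that the later mirror commands assemble can be previewed with plain string composition. The values below are illustrative placeholders, not a real mirror registry:

```shell
# Illustrative values only; substitute your own registry and release.
OCP_RELEASE=4.5.4
LOCAL_REGISTRY='mirror.example.com:5000'
LOCAL_REPOSITORY='ocp4/openshift4'
PRODUCT_REPO='openshift-release-dev'
RELEASE_NAME='ocp-release'
ARCHITECTURE=x86_64

# These are the source and destination references that the
# `oc adm release mirror` invocation below builds from the variables.
FROM="quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE}"
TO="${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}"

echo "$FROM"
echo "$TO"
```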
Mirror the version images to the mirror registry:
If your mirror host does not have internet access, take the following actions:
- Connect the removable media to a system that is connected to the internet.
- Review the images and configuration manifests to mirror:

  $ oc adm release mirror -a ${LOCAL_SECRET_JSON} \
    --from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE} \
    --to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} \
    --to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE} --dry-run

- Record the entire imageContentSources section from the output of the previous command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation.
- Mirror the images to a directory on the removable media:

  $ oc adm release mirror -a ${LOCAL_SECRET_JSON} --to-dir=${REMOVABLE_MEDIA_PATH}/mirror quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE}

- Take the media to the restricted network environment and upload the images to the local container registry:

  $ oc image mirror -a ${LOCAL_SECRET_JSON} --from-dir=${REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:${OCP_RELEASE}*" ${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}

  For REMOVABLE_MEDIA_PATH, you must use the same path that you specified when you mirrored the images.
If the local container registry is connected to the mirror host, take the following actions:
- Directly push the release images to the local registry by using the following command:

  $ oc adm release mirror -a ${LOCAL_SECRET_JSON} \
    --from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE} \
    --to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} \
    --to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}

  This command pulls the release information as a digest, and its output includes the imageContentSources data that you require when you install your cluster.

- Record the entire imageContentSources section from the output of the previous command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation.

  Note: The image name gets patched to Quay.io during the mirroring process, and the podman images will show Quay.io in the registry on the bootstrap virtual machine.
To create the installation program that is based on the content that you mirrored, extract it and pin it to the release:
- If your mirror host does not have internet access, run the following command:

  $ oc adm release extract -a ${LOCAL_SECRET_JSON} --command=openshift-install "${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}"

- If the local container registry is connected to the mirror host, run the following command:

  $ oc adm release extract -a ${LOCAL_SECRET_JSON} --command=openshift-install "${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}"

  Important: To ensure that you use the correct images for the version of OpenShift Container Platform that you selected, you must extract the installation program from the mirrored content. You must perform this step on a machine with an active internet connection.

- If you are in a disconnected environment, use the --image flag as part of must-gather and point to the payload image.
For clusters using installer-provisioned infrastructure, run the following command:
$ openshift-install
3.4. Using Cluster Samples Operator image streams with alternate or mirrored registries
Most image streams in the openshift namespace managed by the Cluster Samples Operator point to images located in the Red Hat registry at registry.redhat.io.
The jenkins, jenkins-agent-maven, and jenkins-agent-nodejs image streams come from the install payload and are managed by the Samples Operator, so no further mirroring actions are needed for those streams.
Setting the samplesRegistry field in the Sample Operator configuration file to registry.redhat.io is redundant because it is already directed to registry.redhat.io for everything but Jenkins images and image streams.
The cli, installer, must-gather, and tests image streams, while part of the install payload, are not managed by the Cluster Samples Operator. These are not addressed in this procedure.
Important: The Cluster Samples Operator must be set to Managed in a disconnected environment.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
- Create a pull secret for your mirror registry.
Procedure
Access the images of a specific image stream to mirror, for example:
$ oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io
Mirror images from registry.redhat.io associated with any image streams you need:
$ oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest ${MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest
Create the cluster's image configuration object:
$ oc create configmap registry-config --from-file=${MIRROR_ADDR_HOSTNAME}..5000=$path/ca.crt -n openshift-config
Add the required trusted CAs for the mirror in the cluster's image configuration object:
$ oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}' --type=merge
Update the samplesRegistry field in the Cluster Samples Operator configuration object to contain the hostname portion of the mirror location defined in the mirror configuration:
$ oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator
Note: This is required because the image stream import process does not use the mirror or search mechanism at this time.
Add any image streams that are not mirrored into the skippedImagestreams field of the Cluster Samples Operator configuration object. Or, if you do not want to support any of the sample image streams, set the Cluster Samples Operator to Removed in the Cluster Samples Operator configuration object.
Note: The Cluster Samples Operator issues alerts if image stream imports are failing but the Cluster Samples Operator is either periodically retrying or does not appear to be retrying them.
Many of the templates in the openshift namespace reference the image streams. So using Removed to purge both the image streams and templates eliminates the possibility of attempts to use them if they are not functional because of any missing image streams.
3.4.1. Cluster Samples Operator assistance for mirroring
During installation, OpenShift Container Platform creates a config map named imagestreamtag-to-image in the openshift-cluster-samples-operator namespace. The imagestreamtag-to-image config map contains an entry, the populating image, for each image stream tag.
The format of the key for each entry in the data field in the config map is <image_stream_name>_<image_stream_tag_name>.
During a disconnected installation of OpenShift Container Platform, the status of the Cluster Samples Operator is set to Removed. If you choose to change it to Managed, it installs samples.
The use of samples in a network-restricted or discontinued environment may require access to services external to your network. Some example services include: GitHub, Maven Central, npm, RubyGems, PyPI and others. There might be additional steps to take that allow the Cluster Samples Operator's objects to reach the services they require.
You can use this config map as a reference for which images need to be mirrored for your image streams to import.
- While the Cluster Samples Operator is set to Removed, you can create your mirrored registry, or determine which existing mirrored registry you want to use.
- Mirror the samples you want to the mirrored registry using the new config map as your guide.
- Add any of the image streams you did not mirror to the skippedImagestreams list of the Cluster Samples Operator configuration object.
- Set samplesRegistry of the Cluster Samples Operator configuration object to the mirrored registry.
- Then set the Cluster Samples Operator to Managed to install the image streams you have mirrored.
See Using Cluster Samples Operator image streams with alternate or mirrored registries for a detailed procedure.
Chapter 4. Creating images
Learn how to create your own container images, based on pre-built images that are ready to help you. The process includes learning best practices for writing images, defining metadata for images, testing images, and using a custom builder workflow to create images to use with OpenShift Container Platform. After you create an image, you can push it to the internal registry.
4.1. Learning container best practices
When creating container images to run on OpenShift Container Platform there are a number of best practices to consider as an image author to ensure a good experience for consumers of those images. Because images are intended to be immutable and used as-is, the following guidelines help ensure that your images are highly consumable and easy to use on OpenShift Container Platform.
4.1.1. General container image guidelines
The following guidelines apply when creating a container image in general, and are independent of whether the images are used on OpenShift Container Platform.
Reuse images
Wherever possible, base your image on an appropriate upstream image using the FROM statement. This ensures your image can easily pick up security fixes from an upstream image when it is updated, rather than you having to update your dependencies directly.
In addition, use tags in the FROM instruction, for example, rhel:rhel7, to make clear to users exactly which version of an image your image is based on. Using a tag other than latest ensures your image is not subjected to breaking changes that might go into the latest version of an upstream image.
Maintain compatibility within tags
When tagging your own images, try to maintain backwards compatibility within a tag. For example, if you provide an image named foo and it currently includes version 1.0, you might provide a foo:v1 tag. When you update the image, as long as it continues to be compatible with the original image, you can continue to tag the new image foo:v1, and downstream consumers of this tag are able to get updates without being broken.
If you later release an incompatible update, then switch to a new tag, for example foo:v2. This allows downstream consumers to move up to the new version at will, but not be inadvertently broken by the new incompatible image. Any downstream consumer using foo:latest takes on the risk of any incompatible changes being introduced.
Avoid multiple processes
Do not start multiple services, such as a database and
SSHD
This colocation ensures the containers share a network namespace and storage for communication. Updates are also less disruptive as each image can be updated less frequently and independently. Signal handling flows are also clearer with a single process as you do not have to manage routing signals to spawned processes.
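As an illustration of this colocation model, the two services can run as separate single-process containers in one pod, sharing a network namespace. This is a minimal sketch; the image names and ports are hypothetical:

```yaml
# Hypothetical pod definition: two single-process containers colocated in
# one pod instead of two services packed into one image.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-db
spec:
  containers:
  - name: frontend
    image: example/frontend:v1    # hypothetical image
    ports:
    - containerPort: 8080
  - name: database
    image: example/database:v1    # hypothetical image
    ports:
    - containerPort: 5432
```

Because both containers share the pod's network namespace, the frontend can reach the database on localhost while each image stays independently updatable.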
Use exec in wrapper scripts
Many images use wrapper scripts to do some setup before starting a process for the software being run. If your image uses such a script, that script uses
exec
exec
If you have a wrapper script that starts a process for some server. You start your container, for example, using
podman run -i
CTRL+C
exec
podman
exec
podman
Also note that your process runs as PID 1 when inside the container. This means that if your main process terminates, the entire container is stopped, canceling any child processes you launched from your PID 1 process.
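A minimal sketch of the wrapper pattern; the setup step and the command being wrapped are illustrative:

```shell
#!/bin/bash
# Hypothetical wrapper script: perform setup, then use exec so the server
# process replaces this shell, becomes PID 1, and receives signals directly.
export APP_CONFIG=/etc/myapp/config   # illustrative setup step
exec "$@"   # after this line, no intermediate shell process remains
```

Started as, for example, `wrapper.sh /usr/bin/myserver`, the server receives SIGINT from CTRL+C directly instead of the signal stopping at the wrapper.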
Clean temporary files
Remove all temporary files you create during the build process. This also includes any files added with the ADD command. For example, run the yum clean command after performing yum install operations.
You can prevent the yum cache from ending up in an image layer by creating your RUN statement as follows:
RUN yum -y install mypackage && yum -y install myotherpackage && yum clean all -y
Note that if you instead write:
RUN yum -y install mypackage
RUN yum -y install myotherpackage && yum clean all -y
Then the first yum invocation leaves extra files in that layer, and these files cannot be removed when the yum clean operation is run later. The extra files are not visible in the final image, but they are present in the underlying layers.
The current container build process does not allow a command run in a later layer to shrink the space used by the image when something was removed in an earlier layer. However, this may change in the future. This means that if you perform an rm command in a later layer, although the files are hidden, it does not reduce the overall size of the image to be downloaded. Therefore, as with the yum clean example, it is best to remove files in the same command that created them, where possible, so they do not end up written to a layer.
In addition, performing multiple commands in a single RUN statement reduces the number of layers in your image, which improves download and extraction time.
Place instructions in the proper order
The container builder reads the Dockerfile and runs the instructions from top to bottom. Every instruction that is successfully executed creates a layer which can be reused the next time this or another image is built. It is very important to place instructions that rarely change at the top of your Dockerfile. Doing so ensures the next builds of the same image are very fast because the cache is not invalidated by upper layer changes.
For example, if you are working on a Dockerfile that contains an ADD command to install a file you are iterating on, and a RUN command to yum install a package, it is best to put the ADD command last:
FROM foo
RUN yum -y install mypackage && yum clean all -y
ADD myfile /test/myfile
This way each time you edit myfile and rerun podman build or docker build, the system reuses the cached layer for the yum command and only generates the new layer for the ADD operation.
If instead you wrote the Dockerfile as:
FROM foo
ADD myfile /test/myfile
RUN yum -y install mypackage && yum clean all -y
Then each time you changed myfile and reran podman build or docker build, the ADD operation would invalidate the RUN layer cache, so the yum operation must be rerun as well.
Mark important ports
The EXPOSE instruction makes a port in the container available to the host system and other containers. While it is possible to specify that a port should be exposed with a podman run invocation, using the EXPOSE instruction in a Dockerfile makes it easier for both humans and software to use your image by explicitly declaring the ports your software needs to run:
- Exposed ports show up under podman ps associated with containers created from your image.
- Exposed ports are present in the metadata for your image returned by podman inspect.
- Exposed ports are linked when you link one container to another.
Set environment variables
It is good practice to set environment variables with the ENV instruction. One example is to set the version of your project. This makes it easy for people to find the version without looking at the Dockerfile. Another example is advertising a path on the system that could be used by another process, such as JAVA_HOME.
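A sketch of both uses in a Dockerfile; the version value and path are hypothetical:

```dockerfile
# Advertise the project version so consumers can discover it
# without reading the Dockerfile.
ENV MYAPP_VERSION=1.2.3
# Advertise a path another process might need.
ENV JAVA_HOME=/usr/lib/jvm/java-11
```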
Avoid default passwords
Avoid setting default passwords. Many people extend the image and forget to remove or change the default password. This can lead to security issues if a user in production is assigned a well-known password. Passwords are configurable using an environment variable instead.
If you do choose to set a default password, ensure that an appropriate warning message is displayed when the container is started. The message should inform the user of the value of the default password and explain how to change it, such as what environment variable to set.
Avoid sshd
It is best to avoid running sshd in your image. You can use the podman exec or docker exec command to access containers that are running on the local host. Alternatively, you can use the oc exec command or the oc rsh command to access containers that are running on the OpenShift Container Platform cluster. Installing and running sshd in your image opens up additional vectors for attack and requirements for security patching.
Use volumes for persistent data
Images use a volume for persistent data. This way OpenShift Container Platform mounts the network storage to the node running the container, and if the container moves to a new node the storage is reattached to that node. By using the volume for all persistent storage needs, the content is preserved even if the container is restarted or moved. If your image writes data to arbitrary locations within the container, that content might not be preserved.
All data that needs to be preserved even after the container is destroyed must be written to a volume. Container engines support a readonly flag for containers, which can be used to strictly enforce good practices about not writing data to ephemeral storage in a container. Designing your image around that capability now makes it easier to take advantage of it later.
Explicitly defining volumes in your Dockerfile makes it easy for consumers of the image to understand what volumes they must define when running your image.
See the Kubernetes documentation for more information on how volumes are used in OpenShift Container Platform.
Even with persistent volumes, each instance of your image has its own volume, and the filesystem is not shared between instances. This means the volume cannot be used to share state in a cluster.
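Declaring the volume explicitly in a Dockerfile can look like the following; the path is illustrative:

```dockerfile
# Declare where the application writes persistent data, so consumers
# know which volume to define when running the image.
VOLUME /var/lib/myapp/data
```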
4.1.2. OpenShift Container Platform-specific guidelines
The following are guidelines that apply when creating container images specifically for use on OpenShift Container Platform.
Enable images for source-to-image (S2I)
For images that are intended to run application code provided by a third party, such as a Ruby image designed to run Ruby code provided by a developer, you can enable your image to work with the Source-to-Image (S2I) build tool. S2I is a framework that makes it easy to write images that take application source code as an input and produce a new image that runs the assembled application as output.
Support arbitrary user ids
By default, OpenShift Container Platform runs containers using an arbitrarily assigned user ID. This provides additional security against processes escaping the container due to a container engine vulnerability and thereby achieving escalated permissions on the host node.
For an image to support running as an arbitrary user, directories and files that are written to by processes in the image must be owned by the root group and be read/writable by that group. Files to be executed must also have group execute permissions.
Adding the following to your Dockerfile sets the directory and file permissions to allow users in the root group to access them in the built image:
RUN chgrp -R 0 /some/directory && \
chmod -R g=u /some/directory
Because the container user is always a member of the root group, the container user can read and write these files.
Care must be taken when altering the directories and file permissions of sensitive areas of a container, which is no different than on a normal system.
If applied to sensitive areas, such as /etc/passwd, such changes can allow the modification of these files by unintended users, potentially exposing the container or host. CRI-O supports the insertion of arbitrary user IDs into a container's /etc/passwd. As such, changing its permissions is never required.
In addition, the processes running in the container must not listen on privileged ports, ports below 1024, since they are not running as a privileged user.
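Putting these guidelines together, a sketch of a Dockerfile prepared for arbitrary user IDs; the base image and paths are illustrative:

```dockerfile
# Illustrative base image
FROM registry.access.redhat.com/ubi8/ubi
# Make the app directories owned by the root group and group-writable,
# because the arbitrary container user is always a member of the root group.
RUN mkdir -p /opt/app-root/data && \
    chgrp -R 0 /opt/app-root && \
    chmod -R g=u /opt/app-root
# Listen on an unprivileged port (above 1024)
EXPOSE 8080
# Numeric USER declaration so S2I builds do not fail by default
USER 1001
```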
If your S2I image does not include a USER declaration with a numeric user, your builds fail by default. To allow images that use either named users or the root (0) user to build in OpenShift Container Platform, you can add the project's builder service account, system:serviceaccount:<your-project>:builder, to the anyuid security context constraint (SCC). Alternatively, you can allow all images to run as any user.
Use services for inter-image communication
For cases where your image needs to communicate with a service provided by another image, such as a web front end image that needs to access a database image to store and retrieve data, your image consumes an OpenShift Container Platform service. Services provide a static endpoint for access which does not change as containers are stopped, started, or moved. In addition, services provide load balancing for requests.
Provide common libraries
For images that are intended to run application code provided by a third party, ensure that your image contains commonly used libraries for your platform. In particular, provide database drivers for common databases used with your platform. For example, provide JDBC drivers for MySQL and PostgreSQL if you are creating a Java framework image. Doing so prevents the need for common dependencies to be downloaded during application assembly time, speeding up application image builds. It also simplifies the work required by application developers to ensure all of their dependencies are met.
Use environment variables for configuration
Users of your image are able to configure it without having to create a downstream image based on your image. This means that the runtime configuration is handled using environment variables. For a simple configuration, the running process can consume the environment variables directly. For a more complicated configuration or for runtimes which do not support this, configure the runtime by defining a template configuration file that is processed during startup. During this processing, values supplied using environment variables can be substituted into the configuration file or used to make decisions about what options to set in the configuration file.
It is also possible and recommended to pass secrets such as certificates and keys into the container using environment variables. This ensures that the secret values do not end up committed in an image and leaked into a container image registry.
Providing environment variables allows consumers of your image to customize behavior, such as database settings, passwords, and performance tuning, without having to introduce a new layer on top of your image. Instead, they can simply define environment variable values when defining a pod and change those settings without rebuilding the image.
For extremely complex scenarios, configuration can also be supplied using volumes that would be mounted into the container at runtime. However, if you elect to do it this way you must ensure that your image provides clear error messages on startup when the necessary volume or configuration is not present.
This topic is related to the Using Services for Inter-image Communication topic in that configuration like datasources are defined in terms of environment variables that provide the service endpoint information. This allows an application to dynamically consume a datasource service that is defined in the OpenShift Container Platform environment without modifying the application image.
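As a sketch, a consumer might supply such settings when defining a pod; the image name and variable names are hypothetical:

```yaml
# Hypothetical pod fragment: configuring the image through environment
# variables instead of building a new layer on top of it.
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: example/myapp:v1           # hypothetical image
    env:
    - name: DATABASE_SERVICE_HOST     # hypothetical service endpoint variable
      value: db.example.svc
    - name: MYAPP_MAX_CONNECTIONS     # hypothetical tuning knob
      value: "50"
```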
In addition, tuning is done by inspecting the cgroups settings for the container. This allows the image to tune itself to the available memory, CPU, and other resources. For example, Java-based images tune their heap based on the cgroup maximums.
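A sketch of how an image's startup script might discover the container memory limit from the cgroup filesystem; the v1 and v2 paths differ, so this checks both:

```shell
#!/bin/bash
# Read the container memory limit so the process can size itself accordingly.
# cgroup v2 exposes memory.max; cgroup v1 exposes memory.limit_in_bytes.
if [ -f /sys/fs/cgroup/memory.max ]; then
    limit=$(cat /sys/fs/cgroup/memory.max)
elif [ -f /sys/fs/cgroup/memory/memory.limit_in_bytes ]; then
    limit=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)
else
    limit="unknown"   # not running under a recognized cgroup layout
fi
echo "memory limit: ${limit}"
```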
Set image metadata
Defining image metadata helps OpenShift Container Platform better consume your container images, allowing OpenShift Container Platform to create a better experience for developers using your image. For example, you can add metadata to provide helpful descriptions of your image, or offer suggestions on other images that are needed.
Clustering
You must fully understand what it means to run multiple instances of your image. In the simplest case, the load balancing function of a service handles routing traffic to all instances of your image. However, many frameworks must share information to perform leader election or failover state; for example, in session replication.
Consider how your instances accomplish this communication when running in OpenShift Container Platform. Although pods can communicate directly with each other, their IP addresses change anytime the pod starts, stops, or is moved. Therefore, it is important for your clustering scheme to be dynamic.
Logging
It is best to send all logging to standard out. OpenShift Container Platform collects standard out from containers and sends it to the centralized logging service where it can be viewed. If you must separate log content, prefix the output with an appropriate keyword, which makes it possible to filter the messages.
If your image logs to a file, users must use manual operations to enter the running container and retrieve or view the log file.
Liveness and readiness probes
Document example liveness and readiness probes that can be used with your image. These probes allow users to deploy your image with confidence that traffic is not routed to the container until it is prepared to handle it, and that the container is restarted if the process gets into an unhealthy state.
Templates
Consider providing an example template with your image. A template gives users an easy way to quickly get your image deployed with a working configuration. Your template must include the liveness and readiness probes you documented with the image, for completeness.
4.2. Including metadata in images
Defining image metadata helps OpenShift Container Platform better consume your container images, allowing OpenShift Container Platform to create a better experience for developers using your image. For example, you can add metadata to provide helpful descriptions of your image, or offer suggestions on other images that may also be needed.
This topic only defines the metadata needed by the current set of use cases. Additional metadata or use cases may be added in the future.
4.2.1. Defining image metadata
You can use the LABEL instruction in a Dockerfile to define image metadata. See the Docker documentation for more information on the LABEL instruction.
The label names are typically namespaced. The namespace is set accordingly to reflect the project that is going to pick up the labels and use them. For OpenShift Container Platform the namespace is set to io.openshift and for Kubernetes the namespace is io.k8s.
See the Docker custom metadata documentation for details about the format.
| Variable | Description |
|---|---|
| io.openshift.tags | This label contains a list of tags represented as a list of comma-separated string values. The tags are the way to categorize the container images into broad areas of functionality. Tags help UI and generation tools to suggest relevant container images during the application creation process. |
| io.openshift.wants | Specifies a list of tags that the generation tools and the UI uses to provide relevant suggestions if you do not have the container images with specified tags already. For example, if the container image wants mysql and redis and you do not have a container image with the redis tag, the UI can suggest adding this image to your deployment. |
| io.k8s.description | This label can be used to give the container image consumers more detailed information about the service or functionality this image provides. The UI can then use this description together with the container image name to provide more human friendly information to end users. |
| io.openshift.non-scalable | An image can use this variable to suggest that it does not support scaling. The UI then communicates this to consumers of that image. Being not-scalable means that the value of replicas should initially not be set higher than 1. |
| io.openshift.min-memory and io.openshift.min-cpu | This label suggests how much resources the container image needs to work properly. The UI can warn the user that deploying this container image may exceed their user quota. The values must be compatible with Kubernetes quantity. |
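For example, a hypothetical Dockerfile fragment setting these labels for a Ruby builder image; the values are illustrative:

```dockerfile
# Illustrative metadata for a hypothetical Ruby builder image
LABEL io.openshift.tags="builder,ruby" \
      io.openshift.wants="mysql" \
      io.k8s.description="Platform for building and running Ruby applications" \
      io.openshift.non-scalable="false"
```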
4.3. Creating images from source code with source-to-image
Source-to-image (S2I) is a framework that makes it easy to write images that take application source code as an input and produce a new image that runs the assembled application as output.
The main advantage of using S2I for building reproducible container images is the ease of use for developers. As a builder image author, you must understand two basic concepts in order for your images to provide the best S2I performance: the build process and S2I scripts.
4.3.1. Understanding the source-to-image build process
The build process consists of the following three fundamental elements, which are combined into a final container image:
- Sources
- Source-to-image (S2I) scripts
- Builder image
S2I generates a Dockerfile with the builder image as the first FROM instruction. The Dockerfile generated by S2I is then passed to Buildah.
4.3.2. How to write source-to-image scripts
You can write source-to-image (S2I) scripts in any programming language, as long as the scripts are executable inside the builder image. S2I supports multiple options providing assemble, run, and save-artifacts scripts. All of these locations are checked on each build in the following order:
- A script specified in the build configuration.
- A script found in the application source .s2i/bin directory.
- A script found at the default image URL with the io.openshift.s2i.scripts-url label.
Both the io.openshift.s2i.scripts-url label specified in the image and the script specified in a build configuration can take one of the following forms:
- image:///path_to_scripts_dir: absolute path inside the image to a directory where the S2I scripts are located.
- file:///path_to_scripts_dir: relative or absolute path to a directory on the host where the S2I scripts are located.
- http(s)://path_to_scripts_dir: URL to a directory where the S2I scripts are located.
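A builder image typically embeds its scripts and advertises their location with this label. A sketch; the source and destination paths are illustrative:

```dockerfile
# Copy the assemble, run, and save-artifacts scripts into the image and
# advertise their location to S2I.
COPY ./s2i/bin/ /usr/libexec/s2i
LABEL io.openshift.s2i.scripts-url=image:///usr/libexec/s2i
```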
| Script | Description |
|---|---|
| assemble | The assemble script builds the application artifacts from a source and places them into appropriate directories inside the image. This script is required. |
| run | The run script executes your application. This script is required. |
| save-artifacts | The save-artifacts script gathers all dependencies that can speed up the build processes that follow. This script is optional. These dependencies are gathered into a tar file and streamed to the standard output. |
| usage | The usage script prints instructions on how to properly use the image. This script is optional. |
| test/run | The test/run script creates a process to check if the image is working correctly. This script is optional. Note: The suggested location to put the test application built by your test/run script is the test/test-app directory in your image repository. |
Example S2I scripts
The following example S2I scripts are written in Bash. Each example assumes its tar contents are unpacked into the /tmp/s2i directory.
assemble script:
#!/bin/bash
# restore build artifacts
if [ "$(ls /tmp/s2i/artifacts/ 2>/dev/null)" ]; then
mv /tmp/s2i/artifacts/* $HOME/.
fi
# move the application source
mv /tmp/s2i/src $HOME/src
# build application artifacts
pushd ${HOME}
make all
# install the artifacts
make install
popd
run script:
#!/bin/bash
# run the application
/opt/application/run.sh
save-artifacts script:
#!/bin/bash
pushd ${HOME}
if [ -d deps ]; then
# all deps contents to tar stream
tar cf - deps
fi
popd
usage script:
#!/bin/bash
# inform the user how to use the image
cat <<EOF
This is an S2I sample builder image. To use it, install
https://github.com/openshift/source-to-image
EOF
4.4. About testing source-to-image images
As a Source-to-Image (S2I) builder image author, you can test your S2I image locally and use the OpenShift Container Platform build system for automated testing and continuous integration.
S2I requires the assemble and run scripts to be present to successfully run the assemble and run steps. Providing the optional save-artifacts script reuses the build artifacts, and providing the optional usage script ensures that usage information is printed to the console when someone runs the container image outside of S2I.
The goal of testing an S2I image is to make sure that all of these described commands work properly, even if the base container image has changed or the tooling used by the commands was updated.
4.4.1. Understanding testing requirements
The standard location for the test script is test/run. This script is invoked by the OpenShift Container Platform S2I image builder, and it could be a simple Bash script or a static Go binary.
The test/run script performs the S2I build, so you must have the S2I binary available in your $PATH.
S2I combines the application source code and builder image, so to test it you need a sample application source to verify that the source successfully transforms into a runnable container image. The sample application should be simple, but it should exercise the crucial steps of assemble and run.
4.4.2. Generating scripts and tools
The S2I tooling comes with powerful generation tools to speed up the process of creating a new S2I image. The s2i create command produces all the necessary S2I scripts and testing tools along with the Makefile:
$ s2i create <image_name> <destination_directory>
The generated test/run script must be adjusted to be useful, but it provides a good starting point to begin developing.
The test/run script produced by the s2i create command requires that the sample application sources be inside the test/test-app directory.
4.4.3. Testing locally
The easiest way to run the S2I image tests locally is to use the generated Makefile.
If you did not use the s2i create command, you can copy the following Makefile template and replace the IMAGE_NAME parameter with your image name.
Sample Makefile
IMAGE_NAME = openshift/ruby-20-centos7
CONTAINER_ENGINE := $(shell command -v podman 2> /dev/null || echo docker)
build:
${CONTAINER_ENGINE} build -t $(IMAGE_NAME) .
.PHONY: test
test:
${CONTAINER_ENGINE} build -t $(IMAGE_NAME)-candidate .
IMAGE_NAME=$(IMAGE_NAME)-candidate test/run
4.4.4. Basic testing workflow
The test script assumes you have already built the image that you want to test.
If you use Podman, run the following command:
$ podman build -t <builder_image_name>
If you use Docker, run the following command:
$ docker build -t <builder_image_name>
The following steps describe the default workflow to test S2I image builders:
Verify the usage script is working:
If you use Podman, run the following command:
$ podman run <builder_image_name>
If you use Docker, run the following command:
$ docker run <builder_image_name>
Build the image:
Build the image:
$ s2i build file:///path-to-sample-app <builder_image_name> <output_application_image_name>
- Optional: if you support save-artifacts, run step 2 once again to verify that saving and restoring artifacts works properly.
Run the container:
If you use Podman, run the following command:
$ podman run <output_application_image_name>
If you use Docker, run the following command:
$ docker run <output_application_image_name>
- Verify the container is running and the application is responding.
Running these steps is generally enough to tell if the builder image is working as expected.
4.4.5. Using OpenShift Container Platform for building the image
Once you have a Dockerfile and the other artifacts that make up your new S2I builder image, you can put them in a git repository and use OpenShift Container Platform to build and push the image. Define a Docker build that points to your repository.
If your OpenShift Container Platform instance is hosted on a public IP address, the build can be triggered each time you push into your S2I builder image GitHub repository.
You can also use the ImageChangeTrigger to trigger a rebuild of your applications that are based on the S2I builder image you updated.
Chapter 5. Managing images
5.1. Managing images overview
With OpenShift Container Platform you can interact with images and set up image streams, depending on where the registries of the images are located, any authentication requirements around those registries, and how you want your builds and deployments to behave.
5.1.1. Images overview
An image stream comprises any number of container images identified by tags. It presents a single virtual view of related images, similar to a container image repository.
By watching an image stream, builds and deployments can receive notifications when new images are added or modified and react by performing a build or deployment, respectively.
5.2. Tagging images
The following sections provide an overview and instructions for using image tags in the context of container images for working with OpenShift Container Platform image streams and their tags.
5.2.1. Image tags
An image tag is a label applied to a container image in a repository that distinguishes a specific image from other images in an image stream. Typically, the tag represents a version number of some sort. For example, here :v3.11.59-2 is the tag:
registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2
You can add additional tags to an image. For example, an image might be assigned the tags :v3.11.59-2 and :latest.
OpenShift Container Platform provides the oc tag command, which is similar to the docker tag command, but operates on image streams instead of directly on images.
5.2.2. Image tag conventions
Images evolve over time and their tags reflect this. Generally, an image tag always points to the latest image built.
If there is too much information embedded in a tag name, like v2.0.1-may-2019, the tag points to just one revision of an image and is never updated. Using default image pruning options, such an image is never removed.
If the tag is named v2.0, more image revisions are likely. This results in longer tag history and, therefore, the image pruner is more likely to remove old and unused images.
Although tag naming convention is up to you, here are a few examples in the format <image_name>:<image_tag>:
| Description | Example |
|---|---|
| Revision | myimage:v2.0.1 |
| Architecture | myimage:v2.0-x86_64 |
| Base image | myimage:v1.2-centos7 |
| Latest (potentially unstable) | myimage:latest |
| Latest stable | myimage:v2.0 |
If you require dates in tag names, periodically inspect old and unsupported images and istags and remove them. Otherwise, you can experience increasing resource usage caused by retaining old images.
5.2.3. Adding tags to image streams
An image stream in OpenShift Container Platform comprises zero or more container images identified by tags.
There are different types of tags available. The default behavior uses a permanent tag, which points to a specific image in time. If the permanent tag is in use and the source changes, the tag does not change.
A tracking tag means the destination tag's metadata is updated during the import of the source tag.
Procedure
You can add tags to an image stream using the oc tag command:
$ oc tag <source> <destination>
For example, to configure the ruby image stream static-2.0 tag to always refer to the current image for the ruby image stream 2.0 tag:
$ oc tag ruby:2.0 ruby:static-2.0
This creates a new image stream tag named static-2.0 in the ruby image stream. The new tag directly references the image id that the ruby:2.0 image stream tag pointed to at the time oc tag was run, and the image it points to never changes.
To ensure the destination tag is updated when the source tag changes, use the --alias=true flag:
$ oc tag --alias=true <source> <destination>
Use a tracking tag for creating permanent aliases, for example, latest or stable. The tag works correctly only within a single image stream. Trying to create a cross-image stream alias produces an error.
- You can also add the --scheduled=true flag to have the destination tag be refreshed, or re-imported, periodically. The period is configured globally at the system level.
- The --reference flag creates an image stream tag that is not imported. The tag points to the source location, permanently.
- If you want to instruct OpenShift Container Platform to always fetch the tagged image from the integrated registry, use --reference-policy=local. The registry uses the pull-through feature to serve the image to the client. By default, the image blobs are mirrored locally by the registry. As a result, they can be pulled more quickly the next time they are needed. The flag also allows for pulling from insecure registries without a need to supply --insecure-registry to the container runtime as long as the image stream has an insecure annotation or the tag has an insecure import policy.
5.2.4. Removing tags from image streams
You can remove tags from an image stream.
Procedure
To remove a tag completely from an image stream, run:
$ oc delete istag/ruby:latest
or:
$ oc tag -d ruby:latest
5.2.5. Referencing images in imagestreams
You can use tags to reference images in image streams using the following reference types.
| Reference type | Description |
|---|---|
| ImageStreamTag | An ImageStreamTag is used to reference or retrieve an image for a given image stream and tag. |
| ImageStreamImage | An ImageStreamImage is used to reference or retrieve an image for a given image stream and image sha ID. |
| DockerImage | A DockerImage is used to reference or retrieve an image for a given external registry. It uses standard Docker pull specification for its name. |
When viewing example image stream definitions you may notice they contain definitions of ImageStreamTag and references to DockerImage, but nothing related to ImageStreamImage.
This is because the ImageStreamImage objects are automatically created in OpenShift Container Platform when you import or tag an image into the image stream. You should never have to explicitly define an ImageStreamImage object in any image stream definition that you use to create image streams.
Procedure
To reference an image for a given image stream and tag, use ImageStreamTag:
<image_stream_name>:<tag>
To reference an image for a given image stream and image sha ID, use ImageStreamImage:
<image_stream_name>@<id>
The <id> is an immutable identifier for a specific image, also called a digest.
To reference or retrieve an image for a given external registry, use DockerImage:
openshift/ruby-20-centos7:2.0
Note: When no tag is specified, it is assumed the latest tag is used.
You can also reference a third-party registry:
registry.redhat.io/rhel7:latest
Or an image with a digest:
centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e
5.3. Image pull policy
Each container in a pod has a container image. After you have created an image and pushed it to a registry, you can then refer to it in the pod.
5.3.1. Image pull policy overview
When OpenShift Container Platform creates containers, it uses the container imagePullPolicy to determine if the image should be pulled prior to starting the container. There are three possible values for imagePullPolicy:
| Value | Description |
|---|---|
| Always | Always pull the image. |
| IfNotPresent | Only pull the image if it does not already exist on the node. |
| Never | Never pull the image. |
If a container imagePullPolicy parameter is not specified, OpenShift Container Platform sets it based on the image's tag:
- If the tag is latest, OpenShift Container Platform defaults imagePullPolicy to Always.
- Otherwise, OpenShift Container Platform defaults imagePullPolicy to IfNotPresent.
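A sketch of setting the policy explicitly in a pod definition rather than relying on the tag-based default; the image name is hypothetical:

```yaml
# Hypothetical pod fragment: pin the pull behavior explicitly.
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: example/myapp:v2.0     # hypothetical image; not :latest
    imagePullPolicy: IfNotPresent
```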
5.4. Using image pull secrets
If you are using the OpenShift Container Platform internal registry and are pulling from image streams located in the same project, then your pod service account should already have the correct permissions and no additional action should be required.
However, for other scenarios, such as referencing images across OpenShift Container Platform projects or from secured registries, then additional configuration steps are required.
You can obtain the image pull secret from the Red Hat OpenShift Cluster Manager. This pull secret is called `pullSecret`.
You use this pull secret to authenticate with the services that are provided by the included authorities, Quay.io and registry.redhat.io, which serve the container images for OpenShift Container Platform components.
Example config.json file
{
"auths":{
"cloud.openshift.com":{
"auth":"b3Blb=",
"email":"you@example.com"
},
"quay.io":{
"auth":"b3Blb=",
"email":"you@example.com"
}
}
}
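Each `auth` value is the base64 encoding of a `<user>:<password>` (or `<user>:<token>`) pair. You can inspect one with `base64`; the credentials below are placeholders, not real values:

```shell
# Decode a pull-secret auth entry back to the user:password pair it encodes
echo 'dXNlcjpwYXNzd29yZA==' | base64 -d    # prints: user:password
```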
5.4.1. Allowing pods to reference images across projects
When using the internal registry, to allow pods in `project-a` to reference images in `project-b`, a service account in `project-a` must be bound to the `system:image-puller` role in `project-b`.
When you create a pod service account or a namespace, wait until the service account is provisioned with a docker pull secret; if you create a pod before its service account is fully provisioned, the pod fails to access the OpenShift Container Platform internal registry.
Procedure
To allow pods in `project-a` to reference images in `project-b`, bind a service account in `project-a` to the `system:image-puller` role in `project-b`:

$ oc policy add-role-to-user \
    system:image-puller system:serviceaccount:project-a:default \
    --namespace=project-b

After adding that role, the pods in `project-a` that reference the default service account are able to pull images from `project-b`.

To allow access for any service account in `project-a`, use the group:

$ oc policy add-role-to-group \
    system:image-puller system:serviceaccounts:project-a \
    --namespace=project-b
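Behind the scenes, `oc policy add-role-to-group` creates a role binding in `project-b`. A sketch of the equivalent object, assuming the group form of the command above:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: system:image-puller        # the name may vary; oc generates one
  namespace: project-b             # the project whose images are shared
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:image-puller
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:project-a   # every service account in project-a
```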
5.4.2. Allowing pods to reference images from other secured registries
The `.dockercfg` file, or `$HOME/.docker/config.json` file for newer Docker clients, is a Docker credentials file that stores your authentication information if you have previously logged into a secured or insecure registry.
To pull a secured container image that is not from OpenShift Container Platform’s internal registry, you must create a pull secret from your Docker credentials and add it to your service account.
Procedure
If you already have a `.dockercfg` file for the secured registry, you can create a secret from that file by running:

$ oc create secret generic <pull_secret_name> \
    --from-file=.dockercfg=<path/to/.dockercfg> \
    --type=kubernetes.io/dockercfg

Or if you have a `$HOME/.docker/config.json` file:

$ oc create secret generic <pull_secret_name> \
    --from-file=.dockerconfigjson=<path/to/.docker/config.json> \
    --type=kubernetes.io/dockerconfigjson

If you do not already have a Docker credentials file for the secured registry, you can create a secret by running:

$ oc create secret docker-registry <pull_secret_name> \
    --docker-server=<registry_server> \
    --docker-username=<user_name> \
    --docker-password=<password> \
    --docker-email=<email>

To use a secret for pulling images for pods, you must add the secret to your service account. The name of the service account in this example should match the name of the service account the pod uses. The default service account is `default`:

$ oc secrets link default <pull_secret_name> --for=pull
5.4.2.1. Pulling from private registries with delegated authentication
A private registry can delegate authentication to a separate service. In these cases, image pull secrets must be defined for both the authentication and registry endpoints.
Procedure
Create a secret for the delegated authentication server:
$ oc create secret docker-registry \
    --docker-server=sso.redhat.com \
    --docker-username=developer@example.com \
    --docker-password=******** \
    --docker-email=unused \
    redhat-connect-sso

secret/redhat-connect-sso

Create a secret for the private registry:

$ oc create secret docker-registry \
    --docker-server=privateregistry.example.com \
    --docker-username=developer@example.com \
    --docker-password=******** \
    --docker-email=unused \
    private-registry

secret/private-registry
5.4.3. Updating the global cluster pull secret
You can update the global pull secret for your cluster by either replacing the current pull secret or appending a new pull secret.
To transfer your cluster to another owner, you must first initiate the transfer in OpenShift Cluster Manager, and then update the pull secret on the cluster. Updating a cluster’s pull secret without initiating the transfer in OpenShift Cluster Manager causes the cluster to stop reporting Telemetry metrics in OpenShift Cluster Manager.
For more information about transferring cluster ownership, see "Transferring cluster ownership" in the Red Hat OpenShift Cluster Manager documentation.
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` role.
Procedure
Optional: To append a new pull secret to the existing pull secret, complete the following steps:
Enter the following command to download the pull secret:
$ oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' > <pull_secret_location>

where `<pull_secret_location>` is the path to the pull secret file.
Enter the following command to add the new pull secret:
$ oc registry login --registry="<registry>" \
    --auth-basic="<username>:<password>" \
    --to=<pull_secret_location>

where `<registry>` is the new registry, `<username>:<password>` is the credential for that registry, and `<pull_secret_location>` is the path to the pull secret file.

Alternatively, you can perform a manual update to the pull secret file.
Enter the following command to update the global pull secret for your cluster:
$ oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location>

where `<pull_secret_location>` is the path to the new pull secret file.
This update is rolled out to all nodes, which can take some time depending on the size of your cluster.
Note: As of OpenShift Container Platform 4.7.4, changes to the global pull secret no longer trigger a node drain or reboot.
Chapter 6. Managing image streams
Image streams provide a means of creating and updating container images in an on-going way. As improvements are made to an image, tags can be used to assign new version numbers and keep track of changes. This document describes how image streams are managed.
6.1. Why use imagestreams
An image stream and its associated tags provide an abstraction for referencing container images from within OpenShift Container Platform. The image stream and its tags allow you to see what images are available and ensure that you are using the specific image you need even if the image in the repository changes.
Image streams do not contain actual image data, but present a single virtual view of related images, similar to an image repository.
You can configure builds and deployments to watch an image stream for notifications when new images are added and react by performing a build or deployment, respectively.
For example, if a deployment is using a certain image and a new version of that image is created, a deployment could be automatically performed to pick up the new version of the image.
However, if the image stream tag used by the deployment or build is not updated, then even if the container image in the container image registry is updated, the build or deployment continues using the previous, presumably known good image.
The source images can be stored in any of the following:
- OpenShift Container Platform’s integrated registry.
- An external registry, for example registry.redhat.io or Quay.io.
- Other image streams in the OpenShift Container Platform cluster.
When you define an object that references an image stream tag, such as a build or deployment configuration, you point to an image stream tag and not the repository. When you build or deploy your application, OpenShift Container Platform queries the repository using the image stream tag to locate the associated ID of the image and uses that exact image.
The image stream metadata is stored in the etcd instance along with other cluster information.
Using image streams has several significant benefits:
- You can tag images, roll back a tag, and quickly manage images from the command line, without having to re-push them to a registry.
- You can trigger builds and deployments when a new image is pushed to the registry. Also, OpenShift Container Platform has generic triggers for other resources, such as Kubernetes objects.
- You can mark a tag for periodic re-import. If the source image has changed, that change is picked up and reflected in the image stream, which triggers the build or deployment flow, depending upon the build or deployment configuration.
- You can share images using fine-grained access control and quickly distribute images across your teams.
- If the source image changes, the image stream tag still points to a known-good version of the image, ensuring that your application does not break unexpectedly.
- You can configure security around who can view and use the images through permissions on the image stream objects.
- Users that lack permission to read or list images on the cluster level can still retrieve the images tagged in a project using image streams.
6.2. Configuring image streams
An `ImageStream` object file contains the following elements.
Imagestream object definition
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
annotations:
openshift.io/generated-by: OpenShiftNewApp
labels:
app: ruby-sample-build
template: application-template-stibuild
name: origin-ruby-sample
namespace: test
spec: {}
status:
dockerImageRepository: 172.30.56.218:5000/test/origin-ruby-sample
tags:
- items:
- created: 2017-09-02T10:15:09Z
dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d
generation: 2
image: sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5
- created: 2017-09-01T13:40:11Z
dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5
generation: 1
image: sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d
tag: latest
In this example:
- `origin-ruby-sample` is the name of the image stream.
- `dockerImageRepository` is the Docker repository path where new images can be pushed to add or update them in this image stream.
- The first `image` SHA identifier under `tags` is the identifier that this image stream tag currently references. Resources that reference this image stream tag use this identifier.
- The second `image` SHA identifier is the one that this image stream tag previously referenced, and can be used to roll back to an older image.
- `latest` is the image stream tag name.
6.3. Image stream images
An image stream image points from within an image stream to a particular image ID.
Image stream images allow you to retrieve metadata about an image from a particular image stream where it is tagged.
Image stream image objects are automatically created in OpenShift Container Platform whenever you import or tag an image into the image stream. You should never have to explicitly define an image stream image object in any image stream definition that you use to create image streams.
The image stream image consists of the image stream name and image ID from the repository, delimited by an `@` sign:

<image-stream-name>@<image-id>

To refer to the image in the `ImageStream` object example shown previously, the image stream image looks like:

origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d
6.4. Image stream tags
An image stream tag is a named pointer to an image in an image stream. It is abbreviated as `istag`.
Image stream tags can reference any local or externally managed image. A tag contains a history of images, represented as a stack of all the images the tag has ever pointed to. Whenever a new or existing image is tagged under a particular image stream tag, it is placed at the first position in the history stack. The image that previously occupied the top position is available at the second position. This allows for easy rollbacks, to make tags point to historical images again.
The following image stream tag is from an `ImageStream` object:
Image stream tag with two images in its history
tags:
- items:
- created: 2017-09-02T10:15:09Z
dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d
generation: 2
image: sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5
- created: 2017-09-01T13:40:11Z
dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5
generation: 1
image: sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d
tag: latest
Image stream tags can be permanent tags or tracking tags.
- Permanent tags are version-specific tags that point to a particular version of an image, such as Python 3.5.
- Tracking tags are reference tags that follow another image stream tag and can be updated to change which image they follow, like a symlink. These new levels are not guaranteed to be backwards-compatible.
For example, the `latest` image stream tags that ship with OpenShift Container Platform are tracking tags. This means consumers of the `latest` image stream tag are updated to the newest level of the framework provided by the image when a new level becomes available. A `latest` image stream tag to `v3.10` can be changed to `v3.11` at any time. It is important to be aware that these `latest` image stream tags behave differently than the Docker `latest` tag. The `latest` image stream tag, in this case, does not point to the latest image in the Docker repository. It points to another image stream tag, which might not be the latest version of an image. For example, if the `latest` image stream tag points to `v3.10` of an image, when the `3.11` version is released, the `latest` tag is not automatically updated to `v3.11`, and remains at `v3.10` until it is manually updated to point to a `v3.11` image stream tag.

Note: Tracking tags are limited to a single image stream and cannot reference other image streams.
You can create your own image stream tags for your own needs.
The image stream tag is composed of the name of the image stream and a tag, separated by a colon:
<imagestream name>:<tag>
For example, to refer to the `sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d` image in the `ImageStream` object example shown previously, the image stream tag would be:

origin-ruby-sample:latest
6.5. Image stream change triggers
Image stream triggers allow your builds and deployments to be automatically invoked when a new version of an upstream image is available.
For example, builds and deployments can be automatically started when an image stream tag is modified. This is achieved by monitoring that particular image stream tag and notifying the build or deployment when a change is detected.
6.6. Image stream mapping
When the integrated registry receives a new image, it creates and sends an image stream mapping to OpenShift Container Platform, providing the image’s project, name, tag, and image metadata.
Configuring image stream mappings is an advanced feature.
This information is used to create a new image, if it does not already exist, and to tag the image into the image stream. OpenShift Container Platform stores complete metadata about each image, such as commands, entry point, and environment variables. Images in OpenShift Container Platform are immutable and the maximum name length is 63 characters.
The following image stream mapping example results in an image being tagged as `test/origin-ruby-sample:latest`:
Image stream mapping object definition
apiVersion: image.openshift.io/v1
kind: ImageStreamMapping
metadata:
creationTimestamp: null
name: origin-ruby-sample
namespace: test
tag: latest
image:
dockerImageLayers:
- name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef
size: 0
- name: sha256:ee1dd2cb6df21971f4af6de0f1d7782b81fb63156801cfde2bb47b4247c23c29
size: 196634330
- name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef
size: 0
- name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef
size: 0
- name: sha256:ca062656bff07f18bff46be00f40cfbb069687ec124ac0aa038fd676cfaea092
size: 177723024
- name: sha256:63d529c59c92843c395befd065de516ee9ed4995549f8218eac6ff088bfa6b6e
size: 55679776
- name: sha256:92114219a04977b5563d7dff71ec4caa3a37a15b266ce42ee8f43dba9798c966
size: 11939149
dockerImageMetadata:
Architecture: amd64
Config:
Cmd:
- /usr/libexec/s2i/run
Entrypoint:
- container-entrypoint
Env:
- RACK_ENV=production
- OPENSHIFT_BUILD_NAMESPACE=test
- OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world.git
- EXAMPLE=sample-app
- OPENSHIFT_BUILD_NAME=ruby-sample-build-1
- PATH=/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
- STI_SCRIPTS_URL=image:///usr/libexec/s2i
- STI_SCRIPTS_PATH=/usr/libexec/s2i
- HOME=/opt/app-root/src
- BASH_ENV=/opt/app-root/etc/scl_enable
- ENV=/opt/app-root/etc/scl_enable
- PROMPT_COMMAND=. /opt/app-root/etc/scl_enable
- RUBY_VERSION=2.2
ExposedPorts:
8080/tcp: {}
Labels:
build-date: 2015-12-23
io.k8s.description: Platform for building and running Ruby 2.2 applications
io.k8s.display-name: 172.30.56.218:5000/test/origin-ruby-sample:latest
io.openshift.build.commit.author: Ben Parees <bparees@users.noreply.github.com>
io.openshift.build.commit.date: Wed Jan 20 10:14:27 2016 -0500
io.openshift.build.commit.id: 00cadc392d39d5ef9117cbc8a31db0889eedd442
io.openshift.build.commit.message: 'Merge pull request #51 from php-coder/fix_url_and_sti'
io.openshift.build.commit.ref: master
io.openshift.build.image: centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e
io.openshift.build.source-location: https://github.com/openshift/ruby-hello-world.git
io.openshift.builder-base-version: 8d95148
io.openshift.builder-version: 8847438ba06307f86ac877465eadc835201241df
io.openshift.s2i.scripts-url: image:///usr/libexec/s2i
io.openshift.tags: builder,ruby,ruby22
io.s2i.scripts-url: image:///usr/libexec/s2i
license: GPLv2
name: CentOS Base Image
vendor: CentOS
User: "1001"
WorkingDir: /opt/app-root/src
Container: 86e9a4a3c760271671ab913616c51c9f3cea846ca524bf07c04a6f6c9e103a76
ContainerConfig:
AttachStdout: true
Cmd:
- /bin/sh
- -c
- tar -C /tmp -xf - && /usr/libexec/s2i/assemble
Entrypoint:
- container-entrypoint
Env:
- RACK_ENV=production
- OPENSHIFT_BUILD_NAME=ruby-sample-build-1
- OPENSHIFT_BUILD_NAMESPACE=test
- OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world.git
- EXAMPLE=sample-app
- PATH=/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
- STI_SCRIPTS_URL=image:///usr/libexec/s2i
- STI_SCRIPTS_PATH=/usr/libexec/s2i
- HOME=/opt/app-root/src
- BASH_ENV=/opt/app-root/etc/scl_enable
- ENV=/opt/app-root/etc/scl_enable
- PROMPT_COMMAND=. /opt/app-root/etc/scl_enable
- RUBY_VERSION=2.2
ExposedPorts:
8080/tcp: {}
Hostname: ruby-sample-build-1-build
Image: centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e
OpenStdin: true
StdinOnce: true
User: "1001"
WorkingDir: /opt/app-root/src
Created: 2016-01-29T13:40:00Z
DockerVersion: 1.8.2.fc21
Id: 9d7fd5e2d15495802028c569d544329f4286dcd1c9c085ff5699218dbaa69b43
Parent: 57b08d979c86f4500dc8cad639c9518744c8dd39447c055a3517dc9c18d6fccd
Size: 441976279
apiVersion: "1.0"
kind: DockerImage
dockerImageMetadataVersion: "1.0"
dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d
6.7. Working with image streams
The following sections describe how to use image streams and image stream tags.
6.7.1. Getting information about image streams
You can get general information about the image stream and detailed information about all the tags it is pointing to.
Procedure
Get general information about the image stream and detailed information about all the tags it is pointing to:
$ oc describe is/<image-name>

For example:

$ oc describe is/python

Example output

Name:                 python
Namespace:            default
Created:              About a minute ago
Labels:               <none>
Annotations:          openshift.io/image.dockerRepositoryCheck=2017-10-02T17:05:11Z
Docker Pull Spec:     docker-registry.default.svc:5000/default/python
Image Lookup:         local=false
Unique Images:        1
Tags:                 1

3.5
  tagged from centos/python-35-centos7

  * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25
      About a minute ago

Get all the information available about a particular image stream tag:

$ oc describe istag/<image-stream>:<tag-name>

For example:

$ oc describe istag/python:latest

Example output

Image Name:     sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25
Docker Image:   centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25
Name:           sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25
Created:        2 minutes ago
Image Size:     251.2 MB (first layer 2.898 MB, last binary layer 72.26 MB)
Image Created:  2 weeks ago
Author:         <none>
Arch:           amd64
Entrypoint:     container-entrypoint
Command:        /bin/sh -c $STI_SCRIPTS_PATH/usage
Working Dir:    /opt/app-root/src
User:           1001
Exposes Ports:  8080/tcp
Docker Labels:  build-date=20170801
The command outputs more information than is shown here.
6.7.2. Adding tags to an image stream
You can add additional tags to image streams.
Procedure
Add a tag that points to one of the existing tags by using the `oc tag` command:

$ oc tag <image-name:tag1> <image-name:tag2>

For example:

$ oc tag python:3.5 python:latest

Example output

Tag python:latest set to python@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25.

Confirm the image stream has two tags, one, `3.5`, pointing at the external container image and another tag, `latest`, pointing to the same image because it was created based on the first tag:

$ oc describe is/python

Example output

Name:                 python
Namespace:            default
Created:              5 minutes ago
Labels:               <none>
Annotations:          openshift.io/image.dockerRepositoryCheck=2017-10-02T17:05:11Z
Docker Pull Spec:     docker-registry.default.svc:5000/default/python
Image Lookup:         local=false
Unique Images:        1
Tags:                 2

latest
  tagged from python@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25

  * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25
      About a minute ago

3.5
  tagged from centos/python-35-centos7

  * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25
      5 minutes ago
6.7.3. Adding tags for an external image
You can add tags for external images.
Procedure
Add tags pointing to internal or external images by using the `oc tag` command for all tag-related operations:

$ oc tag <repository/image> <image-name:tag>

For example, this command maps the `docker.io/python:3.6.0` image to the `3.6` tag in the `python` image stream:

$ oc tag docker.io/python:3.6.0 python:3.6

Example output

Tag python:3.6 set to docker.io/python:3.6.0.

If the external image is secured, you must create a secret with credentials for accessing that registry.
6.7.4. Updating image stream tags
You can update a tag to reflect another tag in an image stream.
Procedure
Update a tag:
$ oc tag <image-name:tag> <image-name:latest>

For example, the following updates the `latest` tag to reflect the `3.6` tag in an image stream:

$ oc tag python:3.6 python:latest

Example output
Tag python:latest set to python@sha256:438208801c4806548460b27bd1fbcb7bb188273d13871ab43f.
6.7.5. Removing image stream tags
You can remove old tags from an image stream.
Procedure
Remove old tags from an image stream:
$ oc tag -d <image-name:tag>

For example:

$ oc tag -d python:3.5

Example output
Deleted tag default/python:3.5.
See Removing deprecated image stream tags from the Cluster Samples Operator for more information on how the Cluster Samples Operator handles deprecated image stream tags.
6.7.6. Configuring periodic importing of image stream tags
When working with an external container image registry, to periodically re-import an image, for example to get the latest security updates, you can use the `--scheduled` flag.
Procedure
Schedule importing images:
$ oc tag <repository/image> <image-name:tag> --scheduled

For example:

$ oc tag docker.io/python:3.6.0 python:3.6 --scheduled

Example output

Tag python:3.6 set to import docker.io/python:3.6.0 periodically.

This command causes OpenShift Container Platform to periodically update this particular image stream tag. This period is a cluster-wide setting, set to 15 minutes by default.

To remove the periodic check, re-run the above command but omit the `--scheduled` flag. This resets its behavior to the default:

$ oc tag <repository/image> <image-name:tag>
6.8. Importing images and image streams from private registries
An image stream can be configured to import tag and image metadata from private image registries requiring authentication. This procedure applies if you change the registry that the Cluster Samples Operator uses to pull content from to something other than registry.redhat.io.
When importing from insecure or secure registries, the registry URL defined in the secret must include the `:80` port suffix, or the secret is not used when attempting to import from the registry.
Procedure
You must create a `secret` object that is used to store your credentials by entering the following command:

$ oc create secret generic <secret_name> --from-file=.dockerconfigjson=<file_absolute_path> --type=kubernetes.io/dockerconfigjson

After the secret is configured, create the new image stream or enter the `oc import-image` command:

$ oc import-image <imagestreamtag> --from=<image> --confirm

During the import process, OpenShift Container Platform picks up the secrets and provides them to the remote party.
6.8.1. Allowing pods to reference images from other secured registries
The `.dockercfg` file, or `$HOME/.docker/config.json` file for newer Docker clients, is a Docker credentials file that stores your authentication information if you have previously logged into a secured or insecure registry.
To pull a secured container image that is not from OpenShift Container Platform’s internal registry, you must create a pull secret from your Docker credentials and add it to your service account.
Procedure
If you already have a `.dockercfg` file for the secured registry, you can create a secret from that file by running:

$ oc create secret generic <pull_secret_name> \
    --from-file=.dockercfg=<path/to/.dockercfg> \
    --type=kubernetes.io/dockercfg

Or if you have a `$HOME/.docker/config.json` file:

$ oc create secret generic <pull_secret_name> \
    --from-file=.dockerconfigjson=<path/to/.docker/config.json> \
    --type=kubernetes.io/dockerconfigjson

If you do not already have a Docker credentials file for the secured registry, you can create a secret by running:

$ oc create secret docker-registry <pull_secret_name> \
    --docker-server=<registry_server> \
    --docker-username=<user_name> \
    --docker-password=<password> \
    --docker-email=<email>

To use a secret for pulling images for pods, you must add the secret to your service account. The name of the service account in this example should match the name of the service account the pod uses. The default service account is `default`:

$ oc secrets link default <pull_secret_name> --for=pull
Chapter 7. Using image streams with Kubernetes resources
Image streams, as OpenShift Container Platform native resources, work out of the box with the other native resources available in OpenShift Container Platform, such as builds and deployments. It is also possible to make them work with native Kubernetes resources, such as jobs, replication controllers, replica sets, or Kubernetes deployments.
7.1. Enabling image streams with Kubernetes resources
When using image streams with Kubernetes resources, you can only reference image streams that reside in the same project as the resource. The image stream reference must consist of a single segment value, for example `ruby:2.5`, where `ruby` is the name of an image stream that has a tag named `2.5` and resides in the same project as the resource making the reference.
This feature cannot be used in the `default` namespace, or in any `openshift-` or `kube-` prefixed namespaces.
There are two ways to enable image streams with Kubernetes resources:
- Enabling image stream resolution on a specific resource. This allows only this resource to use the image stream name in the image field.
- Enabling image stream resolution on an image stream. This allows all resources pointing to this image stream to use it in the image field.
Procedure
You can use `oc set image-lookup` to enable image stream resolution.

To allow all resources to reference the image stream named `mysql`, enter the following command:

$ oc set image-lookup mysql

This sets the `Imagestream.spec.lookupPolicy.local` field to true.

Imagestream with image lookup enabled

apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  annotations:
    openshift.io/display-name: mysql
  name: mysql
  namespace: myproject
spec:
  lookupPolicy:
    local: true

When enabled, the behavior is enabled for all tags within the image stream.
Then you can query the image streams and see if the option is set:
$ oc set image-lookup imagestream --list
You can enable image lookup on a specific resource.
To allow the Kubernetes deployment named `mysql` to use image streams, run the following command:

$ oc set image-lookup deploy/mysql

This sets the `alpha.image.policy.openshift.io/resolve-names` annotation on the deployment.

Deployment with image lookup enabled

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: myproject
spec:
  replicas: 1
  template:
    metadata:
      annotations:
        alpha.image.policy.openshift.io/resolve-names: '*'
    spec:
      containers:
      - image: mysql:latest
        imagePullPolicy: Always
        name: mysql
You can disable image lookup.
To disable image lookup, pass `--enabled=false`:

$ oc set image-lookup deploy/mysql --enabled=false
Chapter 8. Triggering updates on image stream changes
When an image stream tag is updated to point to a new image, OpenShift Container Platform can automatically take action to roll the new image out to resources that were using the old image. You configure this behavior in different ways depending on the type of resource that references the image stream tag.
8.1. OpenShift Container Platform resources
OpenShift Container Platform deployment configurations and build configurations can be automatically triggered by changes to image stream tags. The triggered action can be run using the new value of the image referenced by the updated image stream tag.
8.2. Triggering Kubernetes resources
Kubernetes resources do not have fields for triggering, unlike deployment and build configurations, which include as part of their API definition a set of fields for controlling triggers. Instead, you can use annotations in OpenShift Container Platform to request triggering.
The annotation is defined as follows:
Key: image.openshift.io/triggers
Value:
[
{
"from": {
"kind": "ImageStreamTag",
"name": "example:latest",
"namespace": "myapp"
},
"fieldPath": "spec.template.spec.containers[?(@.name==\"web\")].image",
"paused": false
},
...
]
In this annotation:
- Required: `kind` is the resource to trigger from and must be `ImageStreamTag`.
- Required: `name` must be the name of an image stream tag.
- Optional: `namespace` defaults to the namespace of the object.
- Required: `fieldPath` is the JSON path to change. This field is limited and accepts only a JSON path expression that precisely matches a container by ID or index. For pods, the JSON path is `spec.containers[?(@.name='web')].image`.
- Optional: `paused` indicates whether or not the trigger is paused, and the default value is `false`. Set `paused` to `true` to temporarily disable this trigger.
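Put together on a Kubernetes Deployment, the annotation might look as follows. This is a sketch: the image repository path is a placeholder, while the image stream tag and container name reuse the values from the annotation example above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
  annotations:
    image.openshift.io/triggers: |-
      [
        {
          "from": {"kind": "ImageStreamTag", "name": "example:latest"},
          "fieldPath": "spec.template.spec.containers[?(@.name==\"web\")].image"
        }
      ]
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: web
        # Placeholder; the trigger rewrites this field when example:latest moves
        image: image-registry.openshift-image-registry.svc:5000/myproject/example:latest
```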
When one of the core Kubernetes resources contains both a pod template and this annotation, OpenShift Container Platform attempts to update the object by using the image currently associated with the image stream tag that is referenced by the trigger. The update is performed against the `fieldPath` specified.
Examples of core Kubernetes resources that can contain both a pod template and annotation include:
- `CronJobs`
- `Deployments`
- `StatefulSets`
- `DaemonSets`
- `Jobs`
- `ReplicationControllers`
- `Pods`
8.3. Setting the image trigger on Kubernetes resources
When adding an image trigger to deployments, you can use the `oc set triggers` command. For example, the sample command in this procedure adds an image change trigger to the deployment named `example` so that, when the `example:latest` image stream tag is updated, the `web` container inside the deployment updates with the new image value. This command sets the correct `image.openshift.io/triggers` annotation on the deployment resource.
Procedure
Trigger Kubernetes resources by entering the oc set triggers command:
$ oc set triggers deploy/example --from-image=example:latest -c web
Unless the deployment is paused, this pod template update automatically causes a deployment to occur with the new image value.
Chapter 9. Image configuration resources Copy linkLink copied to clipboard!
Use the following procedure to configure image registries.
9.1. Image controller configuration parameters Copy linkLink copied to clipboard!
The image.config.openshift.io/cluster resource holds cluster-wide information about how to handle images. The canonical, and only valid, name is cluster. Its spec offers the following configuration parameters.

Note: Parameters such as DisableScheduledImport, MaxImagesBulkImportedPerRepository, MaxScheduledImportsPerMinute, ScheduledImageImportMinimumIntervalSeconds, and InternalRegistryHostname are not configurable.
| Parameter | Description |
|---|---|
| allowedRegistriesForImport | Limits the container image registries from which normal users can import images. Set this list to the registries that you trust to contain valid images, and that you want applications to be able to import from. Users with permission to create images or ImageStreamMappings from the API are not affected by this policy. Typically only cluster administrators have the appropriate permissions. Every element of this list contains a location of the registry specified by the registry domain name. |
| additionalTrustedCA | A reference to a config map containing additional CAs that should be trusted during image stream import, pod image pull, openshift-image-registry pullthrough, and builds. The namespace for this config map is openshift-config. The format of the config map is to use the registry hostname as the key, and the PEM certificate as the value, for each additional registry CA to trust. |
| externalRegistryHostnames | Provides the hostnames for the default external image registry. The external hostname should be set only when the image registry is exposed externally. The first value is used in the publicDockerImageRepository field in image streams. |
| registrySources | Contains configuration that determines how the container runtime should treat individual registries when accessing images for builds and pods. For instance, whether or not to allow insecure access. It does not contain configuration for the internal cluster registry. Either blockedRegistries or allowedRegistries can be set, but not both. |
When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default internal image registry, are blocked unless explicitly listed. When using the parameter, to prevent pod failure, add the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. For disconnected clusters, mirror registries should also be added.
The status field of the image.config.openshift.io/cluster resource holds observed values from the cluster.

| Parameter | Description |
|---|---|
| internalRegistryHostname | Set by the Image Registry Operator, which controls the internalRegistryHostname. It sets the hostname for the default internal image registry. |
| externalRegistryHostnames | Set by the Image Registry Operator, provides the external hostnames for the image registry when it is exposed externally. The first value is used in the publicDockerImageRepository field in image streams. |
9.2. Configuring image registry settings Copy linkLink copied to clipboard!
You can configure image registry settings by editing the image.config.openshift.io/cluster custom resource (CR). When changes to the registries are detected in the image.config.openshift.io/cluster CR, the Machine Config Operator (MCO) drains the nodes, applies the change, and uncordons the nodes.
Procedure
Edit the image.config.openshift.io/cluster custom resource:
$ oc edit image.config.openshift.io/cluster
The following is an example image.config.openshift.io/cluster CR:
apiVersion: config.openshift.io/v1
kind: Image 1
metadata:
  annotations:
    release.openshift.io/create-only: "true"
  creationTimestamp: "2019-05-17T13:44:26Z"
  generation: 1
  name: cluster
  resourceVersion: "8302"
  selfLink: /apis/config.openshift.io/v1/images/cluster
  uid: e34555da-78a9-11e9-b92b-06d6c7da38dc
spec:
  allowedRegistriesForImport: 2
  - domainName: quay.io
    insecure: false
  additionalTrustedCA: 3
    name: myconfigmap
  registrySources: 4
    allowedRegistries:
    - example.com
    - quay.io
    - registry.redhat.io
    - image-registry.openshift-image-registry.svc:5000
    - reg1.io/myrepo/myapp:latest
    insecureRegistries:
    - insecure.com
status:
  internalRegistryHostname: image-registry.openshift-image-registry.svc:5000
- 1
- Image: Holds cluster-wide information about how to handle images. The canonical, and only valid, name is cluster.
- 2
- allowedRegistriesForImport: Limits the container image registries from which normal users may import images. Set this list to the registries that you trust to contain valid images, and that you want applications to be able to import from. Users with permission to create images or ImageStreamMappings from the API are not affected by this policy. Typically only cluster administrators have the appropriate permissions.
- 3
- additionalTrustedCA: A reference to a config map containing additional certificate authorities (CA) that are trusted during image stream import, pod image pull, openshift-image-registry pullthrough, and builds. The namespace for this config map is openshift-config. The format of the config map is to use the registry hostname as the key, and the PEM certificate as the value, for each additional registry CA to trust.
- 4
- registrySources: Contains configuration that determines whether the container runtime allows or blocks individual registries when accessing images for builds and pods. Either the allowedRegistries parameter or the blockedRegistries parameter can be set, but not both. You can also define whether or not to allow access to insecure registries or registries that allow image short names. This example uses the allowedRegistries parameter, which defines the registries that are allowed to be used. The insecure registry insecure.com is also allowed. The registrySources parameter does not contain configuration for the internal cluster registry.
Note: When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default internal image registry, are blocked unless explicitly listed. If you use the parameter, to prevent pod failure, you must add the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. Do not add the registry.redhat.io and quay.io registries to the blockedRegistries list.

When using the allowedRegistries, blockedRegistries, or insecureRegistries parameter, you can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest.

Insecure external registries should be avoided to reduce possible security risks.
To check that the changes are applied, list your nodes:
$ oc get nodes
Example output
NAME                                       STATUS                        ROLES    AGE   VERSION
ci-ln-j5cd0qt-f76d1-vfj5x-master-0         Ready                         master   98m   v1.19.0+7070803
ci-ln-j5cd0qt-f76d1-vfj5x-master-1         Ready,SchedulingDisabled      master   99m   v1.19.0+7070803
ci-ln-j5cd0qt-f76d1-vfj5x-master-2         Ready                         master   98m   v1.19.0+7070803
ci-ln-j5cd0qt-f76d1-vfj5x-worker-b-nsnd4   Ready                         worker   90m   v1.19.0+7070803
ci-ln-j5cd0qt-f76d1-vfj5x-worker-c-5z2gz   NotReady,SchedulingDisabled   worker   90m   v1.19.0+7070803
ci-ln-j5cd0qt-f76d1-vfj5x-worker-d-stsjv   Ready                         worker   90m   v1.19.0+7070803
9.2.1. Adding specific registries Copy linkLink copied to clipboard!
You can add a list of registries, and optionally an individual repository within a registry, that are permitted for image pull and push actions by editing the image.config.openshift.io/cluster custom resource (CR).

When pulling or pushing images, the container runtime searches the registries listed under the registrySources parameter in the image.config.openshift.io/cluster CR. If you created a list of registries under the allowedRegistries parameter, the container runtime searches only those registries. Registries not in the list are blocked.
When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default internal image registry, are blocked unless explicitly listed. When using the parameter, to prevent pod failure, add the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. For disconnected clusters, mirror registries should also be added.
Procedure
Edit the image.config.openshift.io/cluster CR:
$ oc edit image.config.openshift.io/cluster
The following is an example image.config.openshift.io/cluster CR with an allowed list:
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  annotations:
    release.openshift.io/create-only: "true"
  creationTimestamp: "2019-05-17T13:44:26Z"
  generation: 1
  name: cluster
  resourceVersion: "8302"
  selfLink: /apis/config.openshift.io/v1/images/cluster
  uid: e34555da-78a9-11e9-b92b-06d6c7da38dc
spec:
  registrySources: 1
    allowedRegistries: 2
    - example.com
    - quay.io
    - registry.redhat.io
    - reg1.io/myrepo/myapp:latest
    - image-registry.openshift-image-registry.svc:5000
status:
  internalRegistryHostname: image-registry.openshift-image-registry.svc:5000
- 1
- Contains configurations that determine how the container runtime should treat individual registries when accessing images for builds and pods. It does not contain configuration for the internal cluster registry.
- 2
- Specify registries, and optionally a repository in that registry, to use for image pull and push actions. All other registries are blocked.
Note: Either the allowedRegistries parameter or the blockedRegistries parameter can be set, but not both.

The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster resource for any changes to the registries. When the MCO detects a change, it drains the nodes, applies the change, and uncordons the nodes. After the nodes return to the Ready state, the allowed registries list is used to update the image signature policy in the /host/etc/containers/policy.json file on each node.

To check that the registries have been added to the policy file, use the following command on a node:
$ cat /host/etc/containers/policy.json
The following policy indicates that only images from the example.com, quay.io, and registry.redhat.io registries are permitted for image pulls and pushes:
Example 9.1. Example image signature policy file
{
  "default": [
    { "type": "reject" }
  ],
  "transports": {
    "atomic": {
      "example.com": [{ "type": "insecureAcceptAnything" }],
      "image-registry.openshift-image-registry.svc:5000": [{ "type": "insecureAcceptAnything" }],
      "insecure.com": [{ "type": "insecureAcceptAnything" }],
      "quay.io": [{ "type": "insecureAcceptAnything" }],
      "reg4.io/myrepo/myapp:latest": [{ "type": "insecureAcceptAnything" }],
      "registry.redhat.io": [{ "type": "insecureAcceptAnything" }]
    },
    "docker": {
      "example.com": [{ "type": "insecureAcceptAnything" }],
      "image-registry.openshift-image-registry.svc:5000": [{ "type": "insecureAcceptAnything" }],
      "insecure.com": [{ "type": "insecureAcceptAnything" }],
      "quay.io": [{ "type": "insecureAcceptAnything" }],
      "reg4.io/myrepo/myapp:latest": [{ "type": "insecureAcceptAnything" }],
      "registry.redhat.io": [{ "type": "insecureAcceptAnything" }]
    },
    "docker-daemon": {
      "": [{ "type": "insecureAcceptAnything" }]
    }
  }
}
If your cluster uses the registrySources.insecureRegistries parameter, ensure that any insecure registries are also included in the allowed list. For example:
spec:
registrySources:
insecureRegistries:
- insecure.com
allowedRegistries:
- example.com
- quay.io
- registry.redhat.io
- insecure.com
- image-registry.openshift-image-registry.svc:5000
9.2.2. Blocking specific registries Copy linkLink copied to clipboard!
You can block any registry, and optionally an individual repository within a registry, by editing the image.config.openshift.io/cluster custom resource (CR).

When pulling or pushing images, the container runtime searches the registries listed under the registrySources parameter in the image.config.openshift.io/cluster CR. If you created a list of registries under the blockedRegistries parameter, the container runtime does not search those registries. All other registries are allowed.

To prevent pod failure, do not add the registry.redhat.io and quay.io registries to the blockedRegistries list, as they are required by payload images within your environment.
Procedure
Edit the image.config.openshift.io/cluster CR:
$ oc edit image.config.openshift.io/cluster
The following is an example image.config.openshift.io/cluster CR with a blocked list:
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  annotations:
    release.openshift.io/create-only: "true"
  creationTimestamp: "2019-05-17T13:44:26Z"
  generation: 1
  name: cluster
  resourceVersion: "8302"
  selfLink: /apis/config.openshift.io/v1/images/cluster
  uid: e34555da-78a9-11e9-b92b-06d6c7da38dc
spec:
  registrySources: 1
    blockedRegistries: 2
    - untrusted.com
    - reg1.io/myrepo/myapp:latest
status:
  internalRegistryHostname: image-registry.openshift-image-registry.svc:5000
- 1
- Contains configurations that determine how the container runtime should treat individual registries when accessing images for builds and pods. It does not contain configuration for the internal cluster registry.
- 2
- Specify registries, and optionally a repository in that registry, that should not be used for image pull and push actions. All other registries are allowed.
Note: Either the blockedRegistries parameter or the allowedRegistries parameter can be set, but not both.

The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster resource for any changes to the registries. When the MCO detects a change, it drains the nodes, applies the change, and uncordons the nodes. After the nodes return to the Ready state, changes to the blocked registries appear in the /etc/containers/registries.conf file on each node.

To check that the registries have been added to the policy file, use the following command on a node:
$ cat /host/etc/containers/registries.conf
The following example indicates that images from the untrusted.com registry are prevented for image pulls and pushes:
Example output
unqualified-search-registries = ["registry.access.redhat.com", "docker.io"]

[[registry]]
  prefix = ""
  location = "untrusted.com"
  blocked = true
9.2.3. Allowing insecure registries Copy linkLink copied to clipboard!
You can add insecure registries, and optionally an individual repository within a registry, by editing the image.config.openshift.io/cluster custom resource (CR).
Registries that do not use valid SSL certificates or do not require HTTPS connections are considered insecure.
Insecure external registries should be avoided to reduce possible security risks.
Procedure
Edit the image.config.openshift.io/cluster CR:
$ oc edit image.config.openshift.io/cluster
The following is an example image.config.openshift.io/cluster CR with an insecure registries list:
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  annotations:
    release.openshift.io/create-only: "true"
  creationTimestamp: "2019-05-17T13:44:26Z"
  generation: 1
  name: cluster
  resourceVersion: "8302"
  selfLink: /apis/config.openshift.io/v1/images/cluster
  uid: e34555da-78a9-11e9-b92b-06d6c7da38dc
spec:
  registrySources: 1
    insecureRegistries: 2
    - insecure.com
    - reg4.io/myrepo/myapp:latest
    allowedRegistries:
    - example.com
    - quay.io
    - registry.redhat.io
    - insecure.com 3
    - reg4.io/myrepo/myapp:latest
    - image-registry.openshift-image-registry.svc:5000
status:
  internalRegistryHostname: image-registry.openshift-image-registry.svc:5000
- 1
- Contains configurations that determine how the container runtime should treat individual registries when accessing images for builds and pods. It does not contain configuration for the internal cluster registry.
- 2
- Specify an insecure registry. You can specify a repository in that registry.
- 3
- Ensure that any insecure registries are included in the allowedRegistries list.
Note: When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default internal image registry, are blocked unless explicitly listed. If you use the parameter, to prevent pod failure, add all registries including the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. For disconnected clusters, mirror registries should also be added.

The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster CR for any changes to the registries, then drains and uncordons the nodes when it detects changes. After the nodes return to the Ready state, changes to the insecure and blocked registries appear in the /etc/containers/registries.conf file on each node.

To check that the registries have been added to the policy file, use the following command on a node:
$ cat /host/etc/containers/registries.conf
The following example indicates that images from the insecure.com registry are insecure and are allowed for image pulls and pushes.
Example output
unqualified-search-registries = ["registry.access.redhat.com", "docker.io"]

[[registry]]
  prefix = ""
  location = "insecure.com"
  insecure = true
9.2.4. Adding registries that allow image short names Copy linkLink copied to clipboard!
You can add registries to search for an image short name by editing the image.config.openshift.io/cluster custom resource (CR).

An image short name enables you to search for images without including the fully qualified domain name in the pull spec. For example, you could use rhel7/etcd instead of registry.access.redhat.com/rhel7/etcd.

You might use short names in situations where using the full path is not practical. For example, if your cluster references multiple internal registries whose DNS changes frequently, you would need to update the fully qualified domain names in your pull specs with each change. In this case, using an image short name might be beneficial.
When pulling or pushing images, the container runtime searches the registries listed under the registrySources parameter in the image.config.openshift.io/cluster CR. If you created a list of registries under the containerRuntimeSearchRegistries parameter, when pulling an image with a short name, the container runtime searches those registries.

Using image short names with public registries is strongly discouraged. You should use image short names with only internal or private registries. If you list public registries under the containerRuntimeSearchRegistries parameter, you expose your credentials to all the registries on the list, and you risk network and registry attacks.
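The lookup described above can be illustrated with a small, self-contained shell sketch. The registry names reg1.io, reg2.io, and reg3.io are hypothetical placeholders (they match the example CR later in this section), not product defaults:

```shell
# Illustration only: the fully qualified candidates a container runtime
# would try, in order, when resolving the short name "rhel7/etcd"
# against a hypothetical search list.
short_name="rhel7/etcd"
for registry in reg1.io reg2.io reg3.io; do
  printf '%s/%s\n' "$registry" "$short_name"
done
```

The first registry that can serve the image is used, which is why the order of the configured list matters.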
The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster resource for any changes to the registries. When the MCO detects a change, it drains the nodes, applies the change, and uncordons the nodes. After the nodes return to the Ready state, if the containerRuntimeSearchRegistries parameter is added, the MCO creates a file in the /etc/containers/registries.conf.d directory on each node with the listed registries. The file overrides the default list of unqualified search registries in the /host/etc/containers/registries.conf file.

The containerRuntimeSearchRegistries parameter works only with the Podman and CRI-O container engines.
Procedure
Edit the image.config.openshift.io/cluster custom resource:
$ oc edit image.config.openshift.io/cluster
The following is an example image.config.openshift.io/cluster CR:
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  annotations:
    release.openshift.io/create-only: "true"
  creationTimestamp: "2019-05-17T13:44:26Z"
  generation: 1
  name: cluster
  resourceVersion: "8302"
  selfLink: /apis/config.openshift.io/v1/images/cluster
  uid: e34555da-78a9-11e9-b92b-06d6c7da38dc
spec:
  allowedRegistriesForImport:
  - domainName: quay.io
    insecure: false
  additionalTrustedCA:
    name: myconfigmap
  registrySources:
    containerRuntimeSearchRegistries: 1
    - reg1.io
    - reg2.io
    - reg3.io
    allowedRegistries: 2
    - example.com
    - quay.io
    - registry.redhat.io
    - reg1.io
    - reg2.io
    - reg3.io
    - image-registry.openshift-image-registry.svc:5000
...
status:
  internalRegistryHostname: image-registry.openshift-image-registry.svc:5000

Note: When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default internal image registry, are blocked unless explicitly listed. If you use this parameter, to prevent pod failure, add all registries including the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. For disconnected clusters, mirror registries should also be added.

To check that the registries have been added, when a node returns to the Ready state, use the following command on the node:
$ cat /host/etc/containers/registries.conf.d/01-image-searchRegistries.conf
Example output
unqualified-search-registries = ['reg1.io', 'reg2.io', 'reg3.io']
9.2.5. Configuring additional trust stores for image registry access Copy linkLink copied to clipboard!
The image.config.openshift.io/cluster custom resource can contain a reference to a config map that contains additional certificate authorities to be trusted during image registry access.
Prerequisites
- The certificate authorities (CA) must be PEM-encoded.
Procedure
You can create a config map in the openshift-config namespace and use its name in AdditionalTrustedCA in the image.config.openshift.io custom resource to provide additional CAs that should be trusted when contacting external registries.

The config map key is the hostname of a registry with the port for which this CA is to be trusted, and the PEM certificate is the value, for each additional registry CA to trust.
Image registry CA config map example
apiVersion: v1
kind: ConfigMap
metadata:
name: my-registry-ca
data:
registry.example.com: |
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
registry-with-port.example.com..5000: |
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
- 1
- If the registry has the port, such as registry-with-port.example.com:5000, the : should be replaced with .. (two dots).
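The key transformation can be sketched with standard shell tools. This is a hypothetical helper for illustration, not part of any OpenShift tooling:

```shell
# Hypothetical helper: derive the config map key for a registry that
# includes a port, by replacing ":" with ".." as the doc describes.
registry="registry-with-port.example.com:5000"
key=$(printf '%s\n' "$registry" | sed 's/:/../')
echo "$key"   # registry-with-port.example.com..5000
```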
You can configure additional CAs with the following procedure.

To configure an additional CA:
$ oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config
$ oc edit image.config.openshift.io cluster
spec:
  additionalTrustedCA:
    name: registry-config
9.2.6. Configuring image registry repository mirroring Copy linkLink copied to clipboard!
Setting up container registry repository mirroring enables you to do the following:
- Configure your OpenShift Container Platform cluster to redirect requests to pull images from a repository on a source image registry and have it resolved by a repository on a mirrored image registry.
- Identify multiple mirrored repositories for each target repository, to make sure that if one mirror is down, another can be used.
The attributes of repository mirroring in OpenShift Container Platform include:
- Image pulls are resilient to registry downtimes.
- Clusters in disconnected environments can pull images from critical locations, such as quay.io, and have registries behind a company firewall provide the requested images.
- A particular order of registries is tried when an image pull request is made, with the permanent registry typically being the last one tried.
- The mirror information you enter is added to the /etc/containers/registries.conf file on every node in the OpenShift Container Platform cluster.
- When a node makes a request for an image from the source repository, it tries each mirrored repository in turn until it finds the requested content. If all mirrors fail, the cluster tries the source repository. If successful, the image is pulled to the node.
Setting up repository mirroring can be done in the following ways:
At OpenShift Container Platform installation:
By pulling container images needed by OpenShift Container Platform and then bringing those images behind your company’s firewall, you can install OpenShift Container Platform into a datacenter that is in a disconnected environment.
After OpenShift Container Platform installation:
Even if you don’t configure mirroring during OpenShift Container Platform installation, you can do so later using the ImageContentSourcePolicy object.
The following procedure provides a post-installation mirror configuration, where you create an ImageContentSourcePolicy object that identifies:
- The source of the container image repository you want to mirror.
- A separate entry for each mirror repository you want to offer the content requested from the source repository.
You can only configure global pull secrets for clusters that have an ImageContentSourcePolicy object.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
Procedure
Configure mirrored repositories, by either:
- Setting up a mirrored repository with Red Hat Quay, as described in Red Hat Quay Repository Mirroring. Using Red Hat Quay allows you to copy images from one repository to another and also automatically sync those repositories repeatedly over time.
Using a tool such as skopeo to copy images manually from the source repository to the mirrored repository.

For example, after installing the skopeo RPM package on a Red Hat Enterprise Linux (RHEL) 7 or RHEL 8 system, use the skopeo command as shown in this example:
$ skopeo copy \
  docker://registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 \
  docker://example.io/example/ubi-minimal
In this example, you have a container image registry that is named example.io with an image repository named example to which you want to copy the ubi8/ubi-minimal image from registry.access.redhat.com. After you create the registry, you can configure your OpenShift Container Platform cluster to redirect requests made of the source repository to the mirrored repository.
- Log in to your OpenShift Container Platform cluster.
Create an ImageContentSourcePolicy file (for example, registryrepomirror.yaml), replacing the source and mirrors with your own registry and repository pairs and images:
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: ubi8repo
spec:
  repositoryDigestMirrors:
  - mirrors:
    - example.io/example/ubi-minimal 1
    source: registry.access.redhat.com/ubi8/ubi-minimal 2
  - mirrors:
    - example.com/example/ubi-minimal
    source: registry.access.redhat.com/ubi8/ubi-minimal
  - mirrors:
    - mirror.example.com/redhat
    source: registry.redhat.io/openshift4 3
- 1
- Indicates the name of the image registry and repository.
- 2
- Indicates the registry and repository containing the content that is mirrored.
- 3
- You can configure a namespace inside a registry to use any image in that namespace. If you use a registry domain as a source, the ImageContentSourcePolicy resource is applied to all repositories from the registry.
Create the new ImageContentSourcePolicy object:
$ oc create -f registryrepomirror.yaml
After the ImageContentSourcePolicy object is created, the new settings are deployed to each node and the cluster starts using the mirrored repository for requests to the source repository.

To check that the mirrored configuration settings are applied, do the following on one of the nodes.
List your nodes:
$ oc get node
Example output
NAME                           STATUS                     ROLES    AGE   VERSION
ip-10-0-137-44.ec2.internal    Ready                      worker   7m    v1.21.0
ip-10-0-138-148.ec2.internal   Ready                      master   11m   v1.21.0
ip-10-0-139-122.ec2.internal   Ready                      master   11m   v1.21.0
ip-10-0-147-35.ec2.internal    Ready,SchedulingDisabled   worker   7m    v1.21.0
ip-10-0-153-12.ec2.internal    Ready                      worker   7m    v1.21.0
ip-10-0-154-10.ec2.internal    Ready                      master   11m   v1.21.0
You can see that scheduling on each worker node is disabled as the change is being applied.
Start the debugging process to access the node:
$ oc debug node/ip-10-0-147-35.ec2.internal
Example output
Starting pod/ip-10-0-147-35ec2internal-debug ...
To use host binaries, run `chroot /host`
Change your root directory to /host:
sh-4.2# chroot /host
Check the /etc/containers/registries.conf file to make sure the changes were made:
sh-4.2# cat /etc/containers/registries.conf
Example output
unqualified-search-registries = ["registry.access.redhat.com", "docker.io"]

[[registry]]
  location = "registry.access.redhat.com/ubi8/"
  insecure = false
  blocked = false
  mirror-by-digest-only = true
  prefix = ""

  [[registry.mirror]]
    location = "example.io/example/ubi8-minimal"
    insecure = false

  [[registry.mirror]]
    location = "example.com/example/ubi8-minimal"
    insecure = false
Pull an image digest to the node from the source and check if it is resolved by the mirror. ImageContentSourcePolicy objects support image digests only, not image tags.
sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6
Troubleshooting repository mirroring
If the repository mirroring procedure does not work as described, use the following information about how repository mirroring works to help troubleshoot the problem.
- The first working mirror is used to supply the pulled image.
- The main registry is only used if no other mirror works.
- From the system context, the Insecure flags are used as fallback.
- The format of the /etc/containers/registries.conf file has changed recently. It is now version 2 and in TOML format.
Chapter 10. Using templates Copy linkLink copied to clipboard!
The following sections provide an overview of templates, as well as how to use and create them.
10.1. Understanding templates Copy linkLink copied to clipboard!
A template describes a set of objects that can be parameterized and processed to produce a list of objects for creation by OpenShift Container Platform. A template can be processed to create anything you have permission to create within a project, for example services, build configurations, and deployment configurations. A template can also define a set of labels to apply to every object defined in the template.
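As a concrete illustration, a minimal template might look like the following sketch. The template name, parameter, label, and Service object here are hypothetical, not taken from this document:

```yaml
# Sketch only: one parameter, one label, and one object that consumes
# the parameter through ${...} substitution.
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: example-template
labels:
  app: example          # applied to every object created from the template
parameters:
- name: APP_NAME
  description: Name used for the created objects
  value: myapp
objects:
- apiVersion: v1
  kind: Service
  metadata:
    name: ${APP_NAME}   # replaced when the template is processed
  spec:
    ports:
    - port: 8080
    selector:
      app: ${APP_NAME}
```

Processing this template with oc process substitutes APP_NAME into each object before creation.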
You can create a list of objects from a template using the CLI or, if a template has been uploaded to your project or the global template library, using the web console.
10.2. Uploading a template Copy linkLink copied to clipboard!
If you have a JSON or YAML file that defines a template, for example as seen in this example, you can upload the template to projects using the CLI. This saves the template to the project for repeated use by any user with appropriate access to that project. Instructions on writing your own templates are provided later in this topic.
Procedure
To upload a template to your current project’s template library, pass the JSON or YAML file with the following command:
$ oc create -f <filename>
To upload a template to a different project, use the -n option with the name of the project:
$ oc create -f <filename> -n <project>
The template is now available for selection using the web console or the CLI.
10.3. Creating an application by using the web console Copy linkLink copied to clipboard!
You can use the web console to create an application from a template.
Procedure
- While in the desired project, click Add to Project.
Select either a builder image from the list of images in your project, or from the service catalog.
Note: Only image stream tags that have the builder tag listed in their annotations appear in this list, as demonstrated here:
kind: "ImageStream"
apiVersion: "v1"
metadata:
  name: "ruby"
  creationTimestamp: null
spec:
  dockerImageRepository: "registry.redhat.io/rhscl/ruby-26-rhel7"
  tags:
  - name: "2.6"
    annotations:
      description: "Build and run Ruby 2.6 applications"
      iconClass: "icon-ruby"
      tags: "builder,ruby" 1
      supports: "ruby:2.6,ruby"
      version: "2.6"
- 1
- Including builder here ensures this image stream tag appears in the web console as a builder.
- Modify the settings in the new application screen to configure the objects to support your application.
10.4. Creating objects from templates by using the CLI Copy linkLink copied to clipboard!
You can use the CLI to process templates and use the configuration that is generated to create objects.
10.4.1. Adding labels Copy linkLink copied to clipboard!
Labels are used to manage and organize generated objects, such as pods. The labels specified in the template are applied to every object that is generated from the template.
Procedure
Add labels in the template from the command line:
$ oc process -f <filename> -l name=otherLabel
10.4.2. Listing parameters Copy linkLink copied to clipboard!
The parameters that you can override are listed in the parameters section of the template.
Procedure
You can list parameters with the CLI by using the following command and specifying the file to be used:
$ oc process --parameters -f <filename>
Alternatively, if the template is already uploaded:
$ oc process --parameters -n <project> <template_name>
For example, the following shows the output when listing the parameters for one of the quick start templates in the default openshift project:
$ oc process --parameters -n openshift rails-postgresql-example
Example output
NAME                         DESCRIPTION                                                                                              GENERATOR    VALUE
SOURCE_REPOSITORY_URL        The URL of the repository with your application source code                                                           https://github.com/sclorg/rails-ex.git
SOURCE_REPOSITORY_REF        Set this to a branch name, tag or other ref of your repository if you are not using the default branch
CONTEXT_DIR                  Set this to the relative path to your project if it is not in the root of your repository
APPLICATION_DOMAIN           The exposed hostname that will route to the Rails service                                                             rails-postgresql-example.openshiftapps.com
GITHUB_WEBHOOK_SECRET        A secret string used to configure the GitHub webhook                                                    expression   [a-zA-Z0-9]{40}
SECRET_KEY_BASE              Your secret key for verifying the integrity of signed cookies                                           expression   [a-z0-9]{127}
APPLICATION_USER             The application user that is used within the sample application to authorize access on pages                          openshift
APPLICATION_PASSWORD         The application password that is used within the sample application to authorize access on pages                      secret
DATABASE_SERVICE_NAME        Database service name                                                                                                 postgresql
POSTGRESQL_USER              database username                                                                                       expression   user[A-Z0-9]{3}
POSTGRESQL_PASSWORD          database password                                                                                       expression   [a-zA-Z0-9]{8}
POSTGRESQL_DATABASE          database name                                                                                                         root
POSTGRESQL_MAX_CONNECTIONS   database max connections                                                                                              10
POSTGRESQL_SHARED_BUFFERS    database shared buffers                                                                                               12MB
The output identifies several parameters that are generated with a regular expression-like generator when the template is processed.
10.4.3. Generating a list of objects Copy linkLink copied to clipboard!
Using the CLI, you can process a file defining a template to return the list of objects to standard output.
Procedure
Process a file defining a template to return the list of objects to standard output:
$ oc process -f <filename>

Alternatively, if the template has already been uploaded to the current project:

$ oc process <template_name>

Create objects from a template by processing the template and piping the output to `oc create`:

$ oc process -f <filename> | oc create -f -

Alternatively, if the template has already been uploaded to the current project:

$ oc process <template> | oc create -f -

You can override any parameter values defined in the file by adding the `-p` option for each `<name>=<value>` pair you want to override. A parameter reference appears in any text field inside the template items.

For example, in the following the `POSTGRESQL_USER` and `POSTGRESQL_DATABASE` parameters of a template are overridden to output a configuration with customized environment variables:

Creating a List of objects from a template

$ oc process -f my-rails-postgresql \
    -p POSTGRESQL_USER=bob \
    -p POSTGRESQL_DATABASE=mydatabase

The JSON file can either be redirected to a file or applied directly without uploading the template by piping the processed output to the `oc create` command:

$ oc process -f my-rails-postgresql \
    -p POSTGRESQL_USER=bob \
    -p POSTGRESQL_DATABASE=mydatabase \
    | oc create -f -

If you have a large number of parameters, you can store them in a file and then pass this file to `oc process`:

$ cat postgres.env
POSTGRESQL_USER=bob
POSTGRESQL_DATABASE=mydatabase

$ oc process -f my-rails-postgresql --param-file=postgres.env

You can also read the environment from standard input by using `-` as the argument to `--param-file`:

$ sed s/bob/alice/ postgres.env | oc process -f my-rails-postgresql --param-file=-
10.5. Modifying uploaded templates Copy linkLink copied to clipboard!
You can edit a template that has already been uploaded to your project.
Procedure
Modify a template that has already been uploaded:
$ oc edit template <template>
10.6. Using instant app and quick start templates Copy linkLink copied to clipboard!
OpenShift Container Platform provides a number of default instant app and quick start templates to make it easy to quickly get started creating a new application for different languages. Templates are provided for Rails (Ruby), Django (Python), Node.js, CakePHP (PHP), and Dancer (Perl). Your cluster administrator must create these templates in the default, global `openshift` project.
By default, the templates build using a public source repository on GitHub that contains the necessary application code.
Procedure
You can list the available default instant app and quick start templates with:
$ oc get templates -n openshift

To modify the source and build your own version of the application:

- Fork the repository referenced by the template’s default `SOURCE_REPOSITORY_URL` parameter.
- Override the value of the `SOURCE_REPOSITORY_URL` parameter when creating from the template, specifying your fork instead of the default value.

By doing this, the build configuration created by the template now points to your fork of the application code, and you can modify the code and rebuild the application at will.
Some of the instant app and quick start templates define a database deployment configuration. The configuration they define uses ephemeral storage for the database content. These templates should be used for demonstration purposes only as all database data is lost if the database pod restarts for any reason.
10.6.1. Quick start templates Copy linkLink copied to clipboard!
A quick start template is a basic example of an application running on OpenShift Container Platform. Quick starts come in a variety of languages and frameworks, and are defined in a template, which is constructed from a set of services, build configurations, and deployment configurations. This template references the necessary images and source repositories to build and deploy the application.
To explore a quick start, create an application from a template. Your administrator must have already installed these templates in your OpenShift Container Platform cluster; you can then select a template from the web console.
Quick starts refer to a source repository that contains the application source code. To customize the quick start, fork the repository and, when creating an application from the template, substitute the default source repository name with your forked repository. This results in builds that are performed using your source code instead of the provided example source. You can then update the code in your source repository and launch a new build to see the changes reflected in the deployed application.
10.6.1.1. Web framework quick start templates Copy linkLink copied to clipboard!
These quick start templates provide a basic application of the indicated framework and language:
- CakePHP: a PHP web framework that includes a MySQL database
- Dancer: a Perl web framework that includes a MySQL database
- Django: a Python web framework that includes a PostgreSQL database
- NodeJS: a NodeJS web application that includes a MongoDB database
- Rails: a Ruby web framework that includes a PostgreSQL database
10.7. Writing templates Copy linkLink copied to clipboard!
You can define new templates to make it easy to recreate all the objects of your application. The template defines the objects it creates along with some metadata to guide the creation of those objects.
The following is an example of a simple template object definition (YAML):
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: redis-template
  annotations:
    description: "Description"
    iconClass: "icon-redis"
    tags: "database,nosql"
objects:
- apiVersion: v1
  kind: Pod
  metadata:
    name: redis-master
  spec:
    containers:
    - env:
      - name: REDIS_PASSWORD
        value: ${REDIS_PASSWORD}
      image: dockerfile/redis
      name: master
      ports:
      - containerPort: 6379
        protocol: TCP
parameters:
- description: Password used for Redis authentication
  from: '[A-Z0-9]{8}'
  generate: expression
  name: REDIS_PASSWORD
labels:
  redis: master
10.7.1. Writing the template description Copy linkLink copied to clipboard!
The template description informs you what the template does and helps you find it when searching in the web console. Additional metadata beyond the template name is optional, but useful to have. In addition to general descriptive information, the metadata also includes a set of tags. Useful tags include the name of the language the template is related to, for example, Java, PHP, Ruby, and so on.
The following is an example of template description metadata:
kind: Template
apiVersion: template.openshift.io/v1
metadata:
  name: cakephp-mysql-example
  annotations:
    openshift.io/display-name: "CakePHP MySQL Example (Ephemeral)"
    description: >-
      An example CakePHP application with a MySQL database. For more information
      about using this template, including OpenShift considerations, see
      https://github.com/sclorg/cakephp-ex/blob/master/README.md.

      WARNING: Any data stored will be lost upon pod destruction. Only use this
      template for testing.
    openshift.io/long-description: >-
      This template defines resources needed to develop a CakePHP application,
      including a build configuration, application DeploymentConfig, and
      database DeploymentConfig. The database is stored in
      non-persistent storage, so this configuration should be used for
      experimental purposes only.
    tags: "quickstart,php,cakephp"
    iconClass: icon-php
    openshift.io/provider-display-name: "Red Hat, Inc."
    openshift.io/documentation-url: "https://github.com/sclorg/cakephp-ex"
    openshift.io/support-url: "https://access.redhat.com"
message: "Your admin credentials are ${ADMIN_USERNAME}:${ADMIN_PASSWORD}"
- 1
- The unique name of the template.
- 2
- A brief, user-friendly name, which can be employed by user interfaces.
- 3
- A description of the template. Include enough detail that users understand what is being deployed and any caveats they must know before deploying. It should also provide links to additional information, such as a README file. Newlines can be included to create paragraphs.
- 4
- Additional template description. This may be displayed by the service catalog, for example.
- 5
- Tags to be associated with the template for searching and grouping. Add tags that include it into one of the provided catalog categories. Refer to the `id` and `categoryAliases` in `CATALOG_CATEGORIES` in the console constants file. The categories can also be customized for the whole cluster.
- 6
- An icon to be displayed with your template in the web console.
Example 10.1. Available icons
-
icon-3scale -
icon-aerogear -
icon-amq -
icon-angularjs -
icon-ansible -
icon-apache -
icon-beaker -
icon-camel -
icon-capedwarf -
icon-cassandra -
icon-catalog-icon -
icon-clojure -
icon-codeigniter -
icon-cordova -
icon-datagrid -
icon-datavirt -
icon-debian -
icon-decisionserver -
icon-django -
icon-dotnet -
icon-drupal -
icon-eap -
icon-elastic -
icon-erlang -
icon-fedora -
icon-freebsd -
icon-git -
icon-github -
icon-gitlab -
icon-glassfish -
icon-go-gopher -
icon-golang -
icon-grails -
icon-hadoop -
icon-haproxy -
icon-helm -
icon-infinispan -
icon-jboss -
icon-jenkins -
icon-jetty -
icon-joomla -
icon-jruby -
icon-js -
icon-knative -
icon-kubevirt -
icon-laravel -
icon-load-balancer -
icon-mariadb -
icon-mediawiki -
icon-memcached -
icon-mongodb -
icon-mssql -
icon-mysql-database -
icon-nginx -
icon-nodejs -
icon-openjdk -
icon-openliberty -
icon-openshift -
icon-openstack -
icon-other-linux -
icon-other-unknown -
icon-perl -
icon-phalcon -
icon-php -
icon-play -
icon-postgresql -
icon-processserver -
icon-python -
icon-quarkus -
icon-rabbitmq -
icon-rails -
icon-redhat -
icon-redis -
icon-rh-integration -
icon-rh-spring-boot -
icon-rh-tomcat -
icon-ruby -
icon-scala -
icon-serverlessfx -
icon-shadowman -
icon-spring-boot -
icon-spring -
icon-sso -
icon-stackoverflow -
icon-suse -
icon-symfony -
icon-tomcat -
icon-ubuntu -
icon-vertx -
icon-wildfly -
icon-windows -
icon-wordpress -
icon-xamarin -
icon-zend
-
- 7
- The name of the person or organization providing the template.
- 8
- A URL referencing further documentation for the template.
- 9
- A URL where support can be obtained for the template.
- 10
- An instructional message that is displayed when this template is instantiated. This field should inform the user how to use the newly created resources. Parameter substitution is performed on the message before being displayed so that generated credentials and other parameters can be included in the output. Include links to any next-steps documentation that users should follow.
10.7.2. Writing template labels Copy linkLink copied to clipboard!
Templates can include a set of labels. These labels are added to each object created when the template is instantiated. Defining a label in this way makes it easy for users to find and manage all the objects created from a particular template.
The following is an example of template object labels:
kind: "Template"
apiVersion: "v1"
...
labels:
  template: "cakephp-mysql-example"
  app: "${NAME}"
10.7.3. Writing template parameters Copy linkLink copied to clipboard!
Parameters allow a value to be supplied by you or generated when the template is instantiated. Then, that value is substituted wherever the parameter is referenced. References can be defined in any field in the objects list field. This is useful for generating random passwords or allowing you to supply a hostname or other user-specific value that is required to customize the template. Parameters can be referenced in two ways:
- As a string value by placing values in the form `${PARAMETER_NAME}` in any string field in the template.
- As a JSON or YAML value by placing values in the form `${{PARAMETER_NAME}}` in place of any field in the template.
When using the `${PARAMETER_NAME}` syntax, multiple parameter references can be combined in a single field and the reference can be embedded within fixed data, such as `"http://${PARAMETER_1}${PARAMETER_2}"`. Both parameter values are substituted and the resulting value is a quoted string.

When using the `${{PARAMETER_NAME}}` syntax, only a single parameter reference is allowed and leading and trailing characters are not permitted. The resulting value is unquoted unless, after substitution is performed, the result is not a valid JSON object. If the result is not a valid JSON value, the resulting value is quoted and treated as a standard string.
A single parameter can be referenced multiple times within a template and it can be referenced using both substitution syntaxes within a single template.
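The difference between the two syntaxes can be sketched in Python. This is a simplified illustration of the behavior described above, not the actual OpenShift template processor:

```python
import json
import re


def substitute(value, params):
    """Substitute template parameters in a single string field.

    ${NAME}   -> replaced inside the string; the field stays a string.
    ${{NAME}} -> if the field is exactly one reference, the raw (unquoted)
                 parameter value replaces the field itself; if that raw
                 value is not valid JSON, it falls back to a plain string.
    """
    m = re.fullmatch(r"\$\{\{(\w+)\}\}", value)
    if m:
        raw = params[m.group(1)]
        try:
            return json.loads(raw)  # e.g. "2" becomes the integer 2
        except json.JSONDecodeError:
            return raw  # not valid JSON: treated as a standard string
    # quoted substitution: any number of references, embedded in fixed text
    return re.sub(r"\$\{(\w+)\}", lambda mm: params[mm.group(1)], value)


params = {"REPLICA_COUNT": "2", "HOST": "example.com"}
print(substitute("${{REPLICA_COUNT}}", params))  # → 2 (an integer, unquoted)
print(substitute("http://${HOST}/app", params))  # → http://example.com/app
```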
A default value can be provided, which is used if you do not supply a different value:
The following is an example of setting an explicit value as the default value:
parameters:
- name: USERNAME
  description: "The user name for Joe"
  value: joe
Parameter values can also be generated based on rules specified in the parameter definition, for example generating a parameter value:
parameters:
- name: PASSWORD
  description: "The random user password"
  generate: expression
  from: "[a-zA-Z0-9]{12}"
In the previous example, processing generates a random password 12 characters long consisting of all upper and lowercase alphabet letters and numbers.
The syntax available is not a full regular expression syntax. However, you can use `\w`, `\d`, `\a`, and `\A` modifiers:

- `[\w]{10}` produces 10 alphabet characters, numbers, and underscores. This follows the PCRE standard and is equal to `[a-zA-Z0-9_]{10}`.
- `[\d]{10}` produces 10 numbers. This is equal to `[0-9]{10}`.
- `[\a]{10}` produces 10 alphabetical characters. This is equal to `[a-zA-Z]{10}`.
- `[\A]{10}` produces 10 punctuation or symbol characters. This is equal to ``[~!@#$%\^&*()\-_+={}\[\]\\|<,>.?/"';:`]{10}``.
Depending on if the template is written in YAML or JSON, and the type of string that the modifier is embedded within, you might need to escape the backslash with a second backslash. The following examples are equivalent:
Example YAML template with a modifier
parameters:
- name: singlequoted_example
  generate: expression
  from: '[\A]{10}'
- name: doublequoted_example
  generate: expression
  from: "[\\A]{10}"
Example JSON template with a modifier
{
  "parameters": [
    {
      "name": "json_example",
      "generate": "expression",
      "from": "[\\A]{10}"
    }
  ]
}
Here is an example of a full template with parameter definitions and references:
kind: Template
apiVersion: template.openshift.io/v1
metadata:
  name: my-template
objects:
- kind: BuildConfig
  apiVersion: build.openshift.io/v1
  metadata:
    name: cakephp-mysql-example
    annotations:
      description: Defines how to build the application
  spec:
    source:
      type: Git
      git:
        uri: "${SOURCE_REPOSITORY_URL}"
        ref: "${SOURCE_REPOSITORY_REF}"
      contextDir: "${CONTEXT_DIR}"
- kind: DeploymentConfig
  apiVersion: apps.openshift.io/v1
  metadata:
    name: frontend
  spec:
    replicas: "${{REPLICA_COUNT}}"
parameters:
- name: SOURCE_REPOSITORY_URL
  displayName: Source Repository URL
  description: The URL of the repository with your application source code
  value: https://github.com/sclorg/cakephp-ex.git
  required: true
- name: GITHUB_WEBHOOK_SECRET
  description: A secret string used to configure the GitHub webhook
  generate: expression
  from: "[a-zA-Z0-9]{40}"
- name: REPLICA_COUNT
  description: Number of replicas to run
  value: "2"
  required: true
message: "... The GitHub webhook secret is ${GITHUB_WEBHOOK_SECRET} ..."
- 1
- This value is replaced with the value of the `SOURCE_REPOSITORY_URL` parameter when the template is instantiated.
- 2
- This value is replaced with the unquoted value of the `REPLICA_COUNT` parameter when the template is instantiated.
- The name of the parameter. This value is used to reference the parameter within the template.
- 4
- The user-friendly name for the parameter. This is displayed to users.
- 5
- A description of the parameter. Provide more detailed information for the purpose of the parameter, including any constraints on the expected value. Descriptions should use complete sentences to follow the console’s text standards. Do not make this a duplicate of the display name.
- 6
- A default value for the parameter which is used if you do not override the value when instantiating the template. Avoid using default values for things like passwords, instead use generated parameters in combination with secrets.
- 7
- Indicates this parameter is required, meaning you cannot override it with an empty value. If the parameter does not provide a default or generated value, you must supply a value.
- 8
- A parameter which has its value generated.
- 9
- The input to the generator. In this case, the generator produces a 40 character alphanumeric value including upper and lowercase characters.
- 10
- Parameters can be included in the template message. This informs you about generated values.
10.7.4. Writing the template object list Copy linkLink copied to clipboard!
The main portion of the template is the list of objects which is created when the template is instantiated. This can be any valid API object, such as a build configuration, deployment configuration, or service. The object is created exactly as defined here, with any parameter values substituted in prior to creation. The definition of these objects can reference parameters defined earlier.
The following is an example of an object list:
kind: "Template"
apiVersion: "v1"
metadata:
  name: my-template
objects:
- kind: "Service"
  apiVersion: "v1"
  metadata:
    name: "cakephp-mysql-example"
    annotations:
      description: "Exposes and load balances the application pods"
  spec:
    ports:
    - name: "web"
      port: 8080
      targetPort: 8080
    selector:
      name: "cakephp-mysql-example"
- 1
- The definition of a service, which is created by this template.
If an object definition’s metadata includes a fixed `namespace` field value, the field is stripped out of the definition during template instantiation. If the `namespace` field contains a parameter reference, normal parameter substitution is performed and the object is created in whatever namespace the parameter substitution resolved the value to, assuming the user has permission to create objects in that namespace.
10.7.5. Marking a template as bindable Copy linkLink copied to clipboard!
The Template Service Broker advertises one service in its catalog for each template object of which it is aware. By default, each of these services is advertised as being bindable, meaning an end user is permitted to bind against the provisioned service.
Procedure
Template authors can prevent end users from binding against services provisioned from a given template.
- Prevent end users from binding against services provisioned from a given template by adding the annotation `template.openshift.io/bindable: "false"` to the template.
10.7.6. Exposing template object fields Copy linkLink copied to clipboard!
Template authors can indicate that fields of particular objects in a template should be exposed. The Template Service Broker recognizes exposed fields on `ConfigMap`, `Secret`, `Service`, and `Route` objects, and returns the values of the exposed fields when a user binds a service backed by the broker.

To expose one or more fields of an object, add annotations prefixed by `template.openshift.io/expose-` or `template.openshift.io/base64-expose-` to the object in the template.

Each annotation key, with its prefix removed, is passed through to become a key in a `bind` response.

Each annotation value is a Kubernetes JSONPath expression, which is resolved at bind time to indicate the object field whose value should be returned in the `bind` response.

`Bind` response key-value pairs can be used in other parts of the system as environment variables. Therefore, it is recommended that every annotation key, with its prefix removed, is a valid environment variable name: beginning with a character `A-Z`, `a-z`, or `_`, and followed by zero or more characters `A-Z`, `a-z`, `0-9`, or `_`.

Unless escaped with a backslash, Kubernetes' JSONPath implementation interprets characters such as `.` and `@` as metacharacters, regardless of their position in the expression. Therefore, for example, to refer to a `ConfigMap` datum named `my.key`, the required JSONPath expression is `{.data['my\.key']}`. Depending on how the JSONPath expression is then written in YAML, an additional backslash might be required, for example `"{.data['my\\.key']}"`.
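The recommended key shape can be checked with a simple regular expression. In this Python sketch, the `valid_bind_key` helper is hypothetical (it is not part of any OpenShift API); it only illustrates the naming recommendation above:

```python
import re

# A bind-response key should be usable as an environment variable name:
# a leading A-Z, a-z, or _, followed by zero or more A-Z, a-z, 0-9, or _.
ENV_NAME = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")


def valid_bind_key(annotation_key, prefix="template.openshift.io/expose-"):
    """Check that the key, with its prefix removed, is a valid env var name.

    This reflects a documentation recommendation, not an enforced rule.
    """
    name = annotation_key[len(prefix):]
    return bool(ENV_NAME.match(name))


print(valid_bind_key("template.openshift.io/expose-service_ip_port"))  # → True
print(valid_bind_key("template.openshift.io/expose-my.key"))           # → False
```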
The following is an example of different objects' fields being exposed:
kind: Template
apiVersion: template.openshift.io/v1
metadata:
  name: my-template
objects:
- kind: ConfigMap
  apiVersion: v1
  metadata:
    name: my-template-config
    annotations:
      template.openshift.io/expose-username: "{.data['my\\.username']}"
  data:
    my.username: foo
- kind: Secret
  apiVersion: v1
  metadata:
    name: my-template-config-secret
    annotations:
      template.openshift.io/base64-expose-password: "{.data['password']}"
  stringData:
    password: bar
- kind: Service
  apiVersion: v1
  metadata:
    name: my-template-service
    annotations:
      template.openshift.io/expose-service_ip_port: "{.spec.clusterIP}:{.spec.ports[?(.name==\"web\")].port}"
  spec:
    ports:
    - name: "web"
      port: 8080
- kind: Route
  apiVersion: route.openshift.io/v1
  metadata:
    name: my-template-route
    annotations:
      template.openshift.io/expose-uri: "http://{.spec.host}{.spec.path}"
  spec:
    path: mypath
The following is an example response to a `bind` operation, given the preceding partial template:
{
  "credentials": {
    "username": "foo",
    "password": "YmFy",
    "service_ip_port": "172.30.12.34:8080",
    "uri": "http://route-test.router.default.svc.cluster.local/mypath"
  }
}
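Note that the `password` value is base64 encoded because it was exposed with the `base64-expose-` prefix. A consumer of the bind response decodes it to recover the original bytes, for example:

```python
import base64

# The base64-expose- prefix means the returned credential is base64 encoded.
bind_response = {
    "credentials": {
        "username": "foo",
        "password": "YmFy",  # base64 encoding of the Secret value "bar"
    }
}

password = base64.b64decode(bind_response["credentials"]["password"])
print(password.decode())  # → bar
```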
Procedure
- Use the `template.openshift.io/expose-` annotation to return the field value as a string. This is convenient, although it does not handle arbitrary binary data.
- If you want to return binary data, use the `template.openshift.io/base64-expose-` annotation instead to base64 encode the data before it is returned.
10.7.7. Waiting for template readiness Copy linkLink copied to clipboard!
Template authors can indicate that certain objects within a template should be waited for before a template instantiation by the service catalog, Template Service Broker, or `TemplateInstance` API is considered complete.

To use this feature, mark one or more objects of kind `Build`, `BuildConfig`, `Deployment`, `DeploymentConfig`, `Job`, or `StatefulSet` in a template with the following annotation:

`"template.alpha.openshift.io/wait-for-ready": "true"`
Template instantiation is not complete until all objects marked with the annotation report ready. Similarly, if any of the annotated objects report failed, or if the template fails to become ready within a fixed timeout of one hour, the template instantiation fails.
For the purposes of instantiation, readiness and failure of each object kind are defined as follows:
| Kind | Readiness | Failure |
|---|---|---|
| `Build` | Object reports phase complete. | Object reports phase canceled, error, or failed. |
| `BuildConfig` | Latest associated build object reports phase complete. | Latest associated build object reports phase canceled, error, or failed. |
| `Deployment` | Object reports new replica set and deployment available. This honors readiness probes defined on the object. | Object reports progressing condition as false. |
| `DeploymentConfig` | Object reports new replication controller and deployment available. This honors readiness probes defined on the object. | Object reports progressing condition as false. |
| `Job` | Object reports completion. | Object reports that one or more failures have occurred. |
| `StatefulSet` | Object reports all replicas ready. This honors readiness probes defined on the object. | Not applicable. |
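The table can be read as a lookup from object kind and status to an instantiation decision. The following Python sketch restates it; the status labels here are simplified placeholders for illustration, not the actual API field values:

```python
# Per-kind status sets, following the readiness table above.
READY = {
    "Build": {"complete"},
    "BuildConfig": {"complete"},          # latest associated build
    "Deployment": {"available"},
    "DeploymentConfig": {"available"},
    "Job": {"complete"},
    "StatefulSet": {"all-replicas-ready"},
}
FAILED = {
    "Build": {"canceled", "error", "failed"},
    "BuildConfig": {"canceled", "error", "failed"},
    "Deployment": {"progressing-false"},
    "DeploymentConfig": {"progressing-false"},
    "Job": {"failed"},
    "StatefulSet": set(),  # failure is not applicable for StatefulSet
}


def check(kind, status):
    """Decide whether instantiation is ready, failed, or still waiting."""
    if status in FAILED[kind]:
        return "failed"
    if status in READY[kind]:
        return "ready"
    return "waiting"


print(check("Build", "complete"))                 # → ready
print(check("Deployment", "progressing-false"))   # → failed
```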
The following is an example template extract, which uses the `wait-for-ready` annotation:
kind: Template
apiVersion: template.openshift.io/v1
metadata:
  name: my-template
objects:
- kind: BuildConfig
  apiVersion: build.openshift.io/v1
  metadata:
    name: ...
    annotations:
      # wait-for-ready used on BuildConfig ensures that template instantiation
      # will fail immediately if build fails
      template.alpha.openshift.io/wait-for-ready: "true"
  spec:
    ...
- kind: DeploymentConfig
  apiVersion: apps.openshift.io/v1
  metadata:
    name: ...
    annotations:
      template.alpha.openshift.io/wait-for-ready: "true"
  spec:
    ...
- kind: Service
  apiVersion: v1
  metadata:
    name: ...
  spec:
    ...
Additional recommendations
- Set memory, CPU, and storage default sizes to make sure your application is given enough resources to run smoothly.
- Avoid referencing the `latest` tag from images if that tag is used across major versions. This can cause running applications to break when new images are pushed to that tag.
latest - A good template builds and deploys cleanly without requiring modifications after the template is deployed.
10.7.8. Creating a template from existing objects Copy linkLink copied to clipboard!
Rather than writing an entire template from scratch, you can export existing objects from your project in YAML form, and then modify the YAML by adding parameters and other customizations to turn it into a template.
Procedure
Export objects in a project in YAML form:
$ oc get -o yaml all > <yaml_filename>

You can also substitute a particular resource type or multiple resources instead of `all`. Run `oc get -h` for more examples.

The object types included in `oc get -o yaml all` are:

- `BuildConfig`
- `Build`
- `DeploymentConfig`
- `ImageStream`
- `Pod`
- `ReplicationController`
- `Route`
- `Service`

Using the `all` alias is not recommended because the contents may vary across different clusters and versions. Instead, specify all required resources.
Chapter 11. Using Ruby on Rails Copy linkLink copied to clipboard!
Ruby on Rails is a web framework written in Ruby. This guide covers using Rails 4 on OpenShift Container Platform.
Go through the whole tutorial to have an overview of all the steps necessary to run your application on OpenShift Container Platform. If you experience a problem, try reading through the entire tutorial and then going back to your issue. It can also be useful to review your previous steps to ensure that all the steps were run correctly.
11.1. Prerequisites Copy linkLink copied to clipboard!
- Basic Ruby and Rails knowledge.
- Locally installed version of Ruby 2.0.0+, Rubygems, Bundler.
- Basic Git knowledge.
- Running instance of OpenShift Container Platform 4.
- Make sure that an instance of OpenShift Container Platform is running and is available. Also make sure that your CLI client is installed and the `oc` command is accessible from your command shell, so you can use it to log in using your email address and password.
11.2. Setting up the database Copy linkLink copied to clipboard!
Rails applications are almost always used with a database. For local development, use the PostgreSQL database.
Procedure
Install the database:
$ sudo yum install -y postgresql postgresql-server postgresql-devel

Initialize the database:

$ sudo postgresql-setup initdb

This command creates the `/var/lib/pgsql/data` directory, in which the data is stored.

Start the database:

$ sudo systemctl start postgresql.service

When the database is running, create your `rails` user:

$ sudo -u postgres createuser -s rails

Note that the user created has no password.
11.3. Writing your application Copy linkLink copied to clipboard!
If you are starting your Rails application from scratch, you must install the Rails gem first. Then you can proceed with writing your application.
Procedure
Install the Rails gem:
$ gem install rails

Example output

Successfully installed rails-4.3.0
1 gem installed

After you install the Rails gem, create a new application with PostgreSQL as your database:

$ rails new rails-app --database=postgresql

Change into your new application directory:

$ cd rails-app

If you already have an application, make sure the `pg` (postgresql) gem is present in your `Gemfile`. If not, edit your `Gemfile` by adding the gem:

gem 'pg'

Generate a new `Gemfile.lock` with all your dependencies:

$ bundle install

In addition to using the `postgresql` database with the `pg` gem, you also must ensure that the `config/database.yml` is using the `postgresql` adapter.

Make sure you updated the `default` section in the `config/database.yml` file, so it looks like this:

default: &default
  adapter: postgresql
  encoding: unicode
  pool: 5
  host: localhost
  username: rails
  password:

Create your application’s development and test databases:

$ rake db:create

This creates `development` and `test` databases in your PostgreSQL server.
11.3.1. Creating a welcome page Copy linkLink copied to clipboard!
Since Rails 4 no longer serves a static `public/index.html` welcome page in production, you must create a new root page.

To have a custom welcome page, complete these steps:
- Create a controller with an index action.
- Create a view page for the welcome controller index action.
- Create a route that serves the application’s root page with the created controller and view.
Rails offers a generator that completes all necessary steps for you.
Procedure
Run Rails generator:
$ rails generate controller welcome indexAll the necessary files are created.
Edit line 2 in the `config/routes.rb` file as follows:

root 'welcome#index'

Run the rails server to verify the page is available:
$ rails serverYou should see your page by visiting http://localhost:3000 in your browser. If you do not see the page, check the logs that are output to your server to debug.
11.3.2. Configuring application for OpenShift Container Platform Copy linkLink copied to clipboard!
To have your application communicate with the PostgreSQL database service running in OpenShift Container Platform, you must edit the `default` section in your `config/database.yml` file.
Procedure
Edit the `default` section in your `config/database.yml` file with pre-defined variables as follows:

Sample `config/database.yml` file

<% user = ENV.key?("POSTGRESQL_ADMIN_PASSWORD") ? "root" : ENV["POSTGRESQL_USER"] %>
<% password = ENV.key?("POSTGRESQL_ADMIN_PASSWORD") ? ENV["POSTGRESQL_ADMIN_PASSWORD"] : ENV["POSTGRESQL_PASSWORD"] %>
<% db_service = ENV.fetch("DATABASE_SERVICE_NAME","").upcase %>

default: &default
  adapter: postgresql
  encoding: unicode
  # For details on connection pooling, see rails configuration guide
  # http://guides.rubyonrails.org/configuring.html#database-pooling
  pool: <%= ENV["POSTGRESQL_MAX_CONNECTIONS"] || 5 %>
  username: <%= user %>
  password: <%= password %>
  host: <%= ENV["#{db_service}_SERVICE_HOST"] %>
  port: <%= ENV["#{db_service}_SERVICE_PORT"] %>
  database: <%= ENV["POSTGRESQL_DATABASE"] %>
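The ERB conditionals above encode a simple rule: if `POSTGRESQL_ADMIN_PASSWORD` is set, connect as `root` with that password; otherwise use the application user and password from the environment. Restated as a Python sketch for clarity (the `db_credentials` helper is hypothetical):

```python
def db_credentials(env):
    """Mirror the ERB logic in config/database.yml on a dict of env vars."""
    if "POSTGRESQL_ADMIN_PASSWORD" in env:
        user, password = "root", env["POSTGRESQL_ADMIN_PASSWORD"]
    else:
        user = env.get("POSTGRESQL_USER")
        password = env.get("POSTGRESQL_PASSWORD")
    # The service name selects which *_SERVICE_HOST variable to read.
    service = env.get("DATABASE_SERVICE_NAME", "").upper()
    host = env.get(f"{service}_SERVICE_HOST")
    return user, password, host


env = {
    "POSTGRESQL_USER": "username",
    "POSTGRESQL_PASSWORD": "password",
    "DATABASE_SERVICE_NAME": "postgresql",
    "POSTGRESQL_SERVICE_HOST": "172.30.1.5",
}
print(db_credentials(env))  # → ('username', 'password', '172.30.1.5')
```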
11.3.3. Storing your application in Git Copy linkLink copied to clipboard!
Building an application in OpenShift Container Platform usually requires that the source code be stored in a git repository, so you must install `git` if you do not already have it installed.
Prerequisites
- Install git.
Procedure
Make sure you are in your Rails application directory by running the `ls -1` command. The output of the command should look like:

$ ls -1

Example output

app
bin
config
config.ru
db
Gemfile
Gemfile.lock
lib
log
public
Rakefile
README.rdoc
test
tmp
vendor

Run the following commands in your Rails app directory to initialize and commit your code to git:

$ git init

$ git add .

$ git commit -m "initial commit"

After your application is committed, you must push it to a remote repository, for example a GitHub account in which you create a new repository.
Set the remote that points to your `git` repository:

$ git remote add origin git@github.com:<namespace/repository-name>.git

Push your application to your remote git repository:
$ git push
11.4. Deploying your application to OpenShift Container Platform Copy linkLink copied to clipboard!
You can deploy your application to OpenShift Container Platform.
After creating the `rails-app` project, you are automatically switched to the new project namespace.
Deploying your application in OpenShift Container Platform involves three steps:
- Creating a database service from OpenShift Container Platform’s PostgreSQL image.
- Creating a frontend service from OpenShift Container Platform’s Ruby 2.0 builder image and your Ruby on Rails source code, which are wired with the database service.
- Creating a route for your application.
Procedure
To deploy your Ruby on Rails application, create a new project for the application:
$ oc new-project rails-app --description="My Rails application" --display-name="Rails Application"
11.4.1. Creating the database service Copy linkLink copied to clipboard!
Your Rails application expects a running database service. For this service, use the PostgreSQL database image.

To create the database service, use the `oc new-app` command. To this command you must pass some necessary environment variables, which are used inside the database container:

- POSTGRESQL_DATABASE
- POSTGRESQL_USER
- POSTGRESQL_PASSWORD
Setting these variables ensures:
- A database exists with the specified name.
- A user exists with the specified name.
- The user can access the specified database with the specified password.
Procedure
Create the database service:
$ oc new-app postgresql -e POSTGRESQL_DATABASE=db_name -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password

To also set the password for the database administrator, append the previous command with:

-e POSTGRESQL_ADMIN_PASSWORD=admin_pw

Watch the progress:

$ oc get pods --watch
11.4.2. Creating the frontend service Copy linkLink copied to clipboard!
To bring your application to OpenShift Container Platform, you must specify a repository in which your application lives.
Procedure
Create the frontend service and specify database related environment variables that were set up when creating the database service:
$ oc new-app path/to/source/code --name=rails-app -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=db_name -e DATABASE_SERVICE_NAME=postgresql

With this command, OpenShift Container Platform fetches the source code, sets up the builder, builds your application image, and deploys the newly created image together with the specified environment variables. The application is named `rails-app`.

Verify the environment variables have been added by viewing the JSON document of the `rails-app` deployment config:

$ oc get dc rails-app -o json

You should see the following section:

Example output

"env": [
  {
    "name": "POSTGRESQL_USER",
    "value": "username"
  },
  {
    "name": "POSTGRESQL_PASSWORD",
    "value": "password"
  },
  {
    "name": "POSTGRESQL_DATABASE",
    "value": "db_name"
  },
  {
    "name": "DATABASE_SERVICE_NAME",
    "value": "postgresql"
  }
],
$ oc logs -f build/rails-app-1

After the build is complete, look at the running pods in OpenShift Container Platform:

$ oc get pods

You should see a line starting with `rails-app-<number>-<hash>`, and that is your application running in OpenShift Container Platform.

Before your application is functional, you must initialize the database by running the database migration script. There are two ways you can do this:
Manually from the running frontend container:
Exec into the frontend container with the `oc rsh` command:

$ oc rsh <frontend_pod_id>

Run the migration from inside the container:

$ RAILS_ENV=production bundle exec rake db:migrate

If you are running your Rails application in a `development` or `test` environment, you do not have to specify the `RAILS_ENV` environment variable.
- By adding pre-deployment lifecycle hooks in your template.
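The lifecycle-hook option can be sketched as a strategy block in the deployment config. This is a sketch, assuming the Recreate strategy and the container name rails-app from this example; adjust the names for your own deployment:

```yaml
# Sketch of a pre-deployment hook that runs the Rails migration before
# each rollout. The container name "rails-app" follows the example app;
# RAILS_ENV matches the manual migration step above.
spec:
  strategy:
    type: Recreate
    recreateParams:
      pre:
        failurePolicy: Abort        # stop the rollout if the migration fails
        execNewPod:
          containerName: rails-app
          command: ["bundle", "exec", "rake", "db:migrate"]
          env:
          - name: RAILS_ENV
            value: production
```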
11.4.3. Creating a route for your application
You can expose a service to create a route for your application.
Procedure
To expose a service by giving it an externally-reachable hostname like www.example.com, use an OpenShift Container Platform route. In your case, you need to expose the frontend service by typing:

$ oc expose service rails-app --hostname=www.example.com
Ensure the hostname you specify resolves into the IP address of the router.
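The oc expose command above is shorthand for creating a Route object directly. A minimal equivalent, using the names from this example (labels and defaults that oc adds are omitted), would look roughly like:

```yaml
# Approximate Route created by
# `oc expose service rails-app --hostname=www.example.com` (a sketch).
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: rails-app
spec:
  host: www.example.com
  to:
    kind: Service
    name: rails-app
```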
Chapter 12. Using images
12.1. Using images overview
Use the following topics to discover the different Source-to-Image (S2I), database, and other container images that are available for OpenShift Container Platform users.
Red Hat official container images are provided in the Red Hat Registry at registry.redhat.io. OpenShift Container Platform's supported S2I, database, and Jenkins images are provided in the openshift4 repository in the Red Hat Quay Registry. For example, quay.io/openshift-release-dev/ocp-v4.0-<address> is the name of the OpenShift Application Platform image.
The xPaaS middleware images are provided in their respective product repositories on the Red Hat Registry, but suffixed with a -openshift. For example, registry.redhat.io/jboss-eap-6/eap64-openshift is the name of the JBoss EAP image.
All Red Hat supported images covered in this section are described in the Container images section of the Red Hat Ecosystem Catalog. For every version of each image, you can find details on its contents and usage. Browse or search for the image that interests you.
The newer versions of container images are not compatible with earlier versions of OpenShift Container Platform. Verify and use the correct version of container images, based on your version of OpenShift Container Platform.
12.2. Configuring Jenkins images
OpenShift Container Platform provides a container image for running Jenkins. This image provides a Jenkins server instance, which can be used to set up a basic flow for continuous testing, integration, and delivery.
The image is based on the Red Hat Universal Base Images (UBI).
OpenShift Container Platform follows the LTS release of Jenkins. OpenShift Container Platform provides an image that contains Jenkins 2.x.
The OpenShift Container Platform Jenkins images are available on Quay.io or registry.redhat.io.
For example:
$ podman pull registry.redhat.io/openshift4/ose-jenkins:<v4.3.0>
To use these images, you can either access them directly from these registries or push them into your OpenShift Container Platform container image registry. Additionally, you can create an image stream that points to the image, either in your container image registry or at the external location. Your OpenShift Container Platform resources can then reference the image stream.
For convenience, OpenShift Container Platform provides image streams in the openshift namespace for the core Jenkins image as well as the example agent images provided for OpenShift Container Platform integration with Jenkins.
12.2.1. Configuration and customization
You can manage Jenkins authentication in two ways:
- OpenShift Container Platform OAuth authentication provided by the OpenShift Container Platform Login plugin.
- Standard authentication provided by Jenkins.
12.2.1.1. OpenShift Container Platform OAuth authentication
OAuth authentication is activated by configuring options on the Configure Global Security panel in the Jenkins UI, or by setting the OPENSHIFT_ENABLE_OAUTH environment variable on the Jenkins Deployment configuration to anything other than false. This activates the OpenShift Container Platform Login plugin, which retrieves the configuration information from pod data or by interacting with the OpenShift Container Platform API server.
Valid credentials are controlled by the OpenShift Container Platform identity provider.
Jenkins supports both browser and non-browser access.
Valid users are automatically added to the Jenkins authorization matrix at log in, where OpenShift Container Platform roles dictate the specific Jenkins permissions that users have. The roles used by default are the predefined admin, edit, and view roles.

Users with the admin role have the traditional Jenkins administrative user permissions. Users with the edit or view role have progressively fewer permissions.

The default OpenShift Container Platform admin, edit, and view roles, and the Jenkins permissions that those roles are assigned in the Jenkins instance, are configurable.
When running Jenkins in an OpenShift Container Platform pod, the login plugin looks for a config map named openshift-jenkins-login-plugin-config in the namespace that Jenkins is running in.
If the plugin finds and can read that config map, you can define the role-to-Jenkins-permission mappings. Specifically:
- The login plugin treats the key and value pairs in the config map as Jenkins permission to OpenShift Container Platform role mappings.
- The key is the Jenkins permission group short ID and the Jenkins permission short ID, with those two separated by a hyphen character.
- If you want to add the Overall Jenkins Administer permission to an OpenShift Container Platform role, the key should be Overall-Administer.
- To get a sense of which permission groups and permission IDs are available, go to the matrix authorization page in the Jenkins console and examine the IDs for the groups and individual permissions in the table they provide.
- The value of the key and value pair is the list of OpenShift Container Platform roles the permission should apply to, with each role separated by a comma.
- If you want to add the Overall Jenkins Administer permission to both the default admin and edit roles, as well as a new Jenkins role you have created, jenkins, the value for the key Overall-Administer would be admin,edit,jenkins.
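Put together, a config map expressing such mappings might look like the following sketch. The role name jenkins stands in for a hypothetical custom role; the key format is the <group>-<permission> short IDs described above:

```yaml
# Sketch of an openshift-jenkins-login-plugin-config config map.
# Keys are Jenkins "<group>-<permission>" short IDs; values are
# comma-separated OpenShift Container Platform role names.
apiVersion: v1
kind: ConfigMap
metadata:
  name: openshift-jenkins-login-plugin-config
data:
  Overall-Administer: admin,edit,jenkins
  Overall-Read: admin,edit,view
```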
The admin user that is pre-populated in the OpenShift Container Platform Jenkins image with administrative privileges is not given those privileges when OpenShift Container Platform OAuth is used. To grant these permissions, the OpenShift Container Platform cluster administrator must explicitly define that user in the OpenShift Container Platform identity provider and assign the admin role to the user.
Jenkins users' permissions that are stored can be changed after the users are initially established. The OpenShift Container Platform Login plugin polls the OpenShift Container Platform API server for permissions and updates the permissions stored in Jenkins for each user with the permissions retrieved from OpenShift Container Platform. If the Jenkins UI is used to update permissions for a Jenkins user, the permission changes are overwritten the next time the plugin polls OpenShift Container Platform.
You can control how often the polling occurs with the OPENSHIFT_PERMISSIONS_POLL_INTERVAL environment variable.
The easiest way to create a new Jenkins service using OAuth authentication is to use a template.
12.2.1.2. Jenkins authentication
Jenkins authentication is used by default if the image is run directly, without using a template.
The first time Jenkins starts, the configuration is created along with the administrator user and password. The default user credentials are admin and password. Configure the default password by setting the JENKINS_PASSWORD environment variable when using, and only when using, standard Jenkins authentication.
Procedure
Create a Jenkins application that uses standard Jenkins authentication:
$ oc new-app -e \
    JENKINS_PASSWORD=<password> \
    openshift4/ose-jenkins
12.2.2. Jenkins environment variables
The Jenkins server can be configured with the following environment variables:
| Variable | Definition | Example values and settings |
|---|---|---|
| OPENSHIFT_ENABLE_OAUTH | Determines whether the OpenShift Container Platform Login plugin manages authentication when logging in to Jenkins. To enable, set to true. | Default: false |
| JENKINS_PASSWORD | The password for the admin user when using standard Jenkins authentication. Not applicable when OPENSHIFT_ENABLE_OAUTH is set to true. | Default: password |
| JAVA_MAX_HEAP_PARAM, CONTAINER_HEAP_PERCENT, JENKINS_MAX_HEAP_UPPER_BOUND_MB | These values control the maximum heap size of the Jenkins JVM. If JAVA_MAX_HEAP_PARAM is set, its value takes precedence. Otherwise, the maximum heap size is dynamically calculated as CONTAINER_HEAP_PERCENT of the container memory limit, optionally capped at JENKINS_MAX_HEAP_UPPER_BOUND_MB MiB. By default, the maximum heap size of the Jenkins JVM is set to 50% of the container memory limit with no cap. | JAVA_MAX_HEAP_PARAM example setting: -Xmx512m. CONTAINER_HEAP_PERCENT default: 0.5, or 50%. JENKINS_MAX_HEAP_UPPER_BOUND_MB example setting: 512 MiB |
| JAVA_INITIAL_HEAP_PARAM, CONTAINER_INITIAL_PERCENT | These values control the initial heap size of the Jenkins JVM. If JAVA_INITIAL_HEAP_PARAM is set, its value takes precedence. Otherwise, the initial heap size is dynamically calculated as CONTAINER_INITIAL_PERCENT of the dynamically calculated maximum heap size. By default, the JVM sets the initial heap size. | JAVA_INITIAL_HEAP_PARAM example setting: -Xms32m. CONTAINER_INITIAL_PERCENT example setting: 0.1, or 10% |
| CONTAINER_CORE_LIMIT | If set, specifies an integer number of cores used for sizing numbers of internal JVM threads. | Example setting: 2 |
| JAVA_TOOL_OPTIONS | Specifies options to apply to all JVMs running in this container. It is not recommended to override this value. | Default: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true |
| JAVA_GC_OPTS | Specifies Jenkins JVM garbage collection parameters. It is not recommended to override this value. | Default: -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 |
| JENKINS_JAVA_OVERRIDES | Specifies additional options for the Jenkins JVM. These options are appended to all other options, including the Java options above, and may be used to override any of them if necessary. Separate each additional option with a space; if any option contains space characters, escape them with a backslash. | Example settings: -Dfoo -Dbar; -Dfoo=first\ value -Dbar=second\ value |
| JENKINS_OPTS | Specifies arguments to Jenkins. | |
| INSTALL_PLUGINS | Specifies additional Jenkins plugins to install when the container is first run or when OVERRIDE_PV_PLUGINS_WITH_IMAGE_PLUGINS is set to true. Plugins are specified as a comma-delimited list of name:version pairs. | Example setting: git:3.7.0,subversion:2.10.2 |
| OPENSHIFT_PERMISSIONS_POLL_INTERVAL | Specifies the interval in milliseconds that the OpenShift Container Platform Login plugin polls OpenShift Container Platform for the permissions that are associated with each user that is defined in Jenkins. | Default: 300000, or 5 minutes |
| OVERRIDE_PV_CONFIG_WITH_IMAGE_CONFIG | When running this image with an OpenShift Container Platform persistent volume (PV) for the Jenkins configuration directory, the transfer of configuration from the image to the PV is performed only the first time the image starts because the PV is assigned when the persistent volume claim (PVC) is created. If you create a custom image that extends this image and updates configuration in the custom image after the initial startup, the configuration is not copied over unless you set this environment variable to true. | Default: false |
| OVERRIDE_PV_PLUGINS_WITH_IMAGE_PLUGINS | When running this image with an OpenShift Container Platform PV for the Jenkins configuration directory, the transfer of plugins from the image to the PV is performed only the first time the image starts because the PV is assigned when the PVC is created. If you create a custom image that extends this image and updates plugins in the custom image after the initial startup, the plugins are not copied over unless you set this environment variable to true. | Default: false |
| ENABLE_FATAL_ERROR_LOG_FILE | When running this image with an OpenShift Container Platform PVC for the Jenkins configuration directory, this environment variable allows the fatal error log file to persist when a fatal error occurs. The fatal error file is saved at /var/lib/jenkins/logs. | Default: false |
| NODEJS_SLAVE_IMAGE | Setting this value overrides the image that is used for the default Node.js agent pod configuration. A related image stream tag named jenkins-agent-nodejs is in the project. This variable must be set before Jenkins starts the first time for it to have an effect. | Default Node.js agent image in Jenkins server: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-nodejs:latest |
| MAVEN_SLAVE_IMAGE | Setting this value overrides the image that is used for the default Maven agent pod configuration. A related image stream tag named jenkins-agent-maven is in the project. This variable must be set before Jenkins starts the first time for it to have an effect. | Default Maven agent image in Jenkins server: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-maven:latest |
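To make the heap-sizing rows concrete, the following shell sketch reproduces the default calculation, 50% of the container memory limit. The arithmetic is illustrative only; it is not the image's actual startup script:

```shell
# Illustrative calculation of the Jenkins JVM max heap from the
# container memory limit and CONTAINER_HEAP_PERCENT (default 0.5).
container_memory_limit_mb=2048
CONTAINER_HEAP_PERCENT="${CONTAINER_HEAP_PERCENT:-0.5}"
max_heap_mb=$(awk -v limit="$container_memory_limit_mb" \
                  -v pct="$CONTAINER_HEAP_PERCENT" \
                  'BEGIN { printf "%d", limit * pct }')
echo "-Xmx${max_heap_mb}m"   # prints -Xmx1024m for a 2 GiB limit
```

Setting JAVA_MAX_HEAP_PARAM would bypass this calculation entirely, and JENKINS_MAX_HEAP_UPPER_BOUND_MB would cap the computed value.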
12.2.3. Providing Jenkins cross project access
If you are going to run Jenkins somewhere other than the project that it must operate on, you must provide Jenkins an access token to access that project.
Procedure
Identify the secret for the service account that has appropriate permissions to access the project Jenkins must access:
$ oc describe serviceaccount jenkins

Example output

Name:       default
Labels:     <none>
Secrets:    {  jenkins-token-uyswp  }
            {  jenkins-dockercfg-xcr3d  }
Tokens:     jenkins-token-izv1u
            jenkins-token-uyswp

In this case the secret is named jenkins-token-uyswp.

Retrieve the token from the secret:

$ oc describe secret <secret name from above>

Example output

Name:           jenkins-token-uyswp
Labels:         <none>
Annotations:    kubernetes.io/service-account.name=jenkins,kubernetes.io/service-account.uid=32f5b661-2a8f-11e5-9528-3c970e3bf0b7
Type:           kubernetes.io/service-account-token
Data
====
ca.crt: 1066 bytes
token: eyJhbGc..<content cut>....wRA

The token parameter contains the token value Jenkins requires to access the project.
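If you read the token with oc get secret -o jsonpath='{.data.token}' instead of oc describe, the value is base64-encoded and must be decoded before use. A minimal sketch with a stand-in value:

```shell
# The encoded value below is a stand-in: base64 of the truncated
# "eyJhbGc..." shown above. A real service account token is a much
# longer JWT string.
encoded="ZXlKaGJHYy4uLg=="
token=$(printf '%s' "$encoded" | base64 -d)
echo "$token"   # prints eyJhbGc...
```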
12.2.4. Jenkins cross volume mount points
The Jenkins image can be run with mounted volumes to enable persistent storage for the configuration:
- /var/lib/jenkins is the data directory where Jenkins stores configuration files, including job definitions.
12.2.5. Customizing the Jenkins image through source-to-image
To customize the official OpenShift Container Platform Jenkins image, you can use the image as a source-to-image (S2I) builder.
You can use S2I to copy your custom Jenkins job definitions, add additional plugins, or replace the provided config.xml file with your own, custom, configuration.
To include your modifications in the Jenkins image, you must have a Git repository with the following directory structure:
- plugins: This directory contains those binary Jenkins plugins you want to copy into Jenkins.
- plugins.txt: This file lists the plugins you want to install using the following syntax: pluginId:pluginVersion
- configuration/jobs: This directory contains the Jenkins job definitions.
- configuration/config.xml: This file contains your custom Jenkins configuration.
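For example, a plugins.txt in this layout could look like the following. The plugin IDs are real Jenkins plugin names, but the versions shown are illustrative, not a recommended set:

```
credentials:2.3.0
workflow-aggregator:2.6
```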
The contents of the configuration/ directory are copied to the /var/lib/jenkins/ directory, so you can also include additional files there, such as credentials.xml.
Sample build configuration that customizes the Jenkins image in OpenShift Container Platform
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
name: custom-jenkins-build
spec:
source:
git:
uri: https://github.com/custom/repository
type: Git
strategy:
sourceStrategy:
from:
kind: ImageStreamTag
name: jenkins:2
namespace: openshift
type: Source
output:
to:
kind: ImageStreamTag
name: custom-jenkins:latest
- 1: The source parameter defines the source Git repository with the layout described above.
- 2: The strategy parameter defines the original Jenkins image to use as a source image for the build.
- 3: The output parameter defines the resulting, customized Jenkins image that you can use in deployment configurations instead of the official Jenkins image.
12.2.6. Configuring the Jenkins Kubernetes plugin
The OpenShift Container Platform Jenkins image includes the pre-installed Kubernetes plugin that allows Jenkins agents to be dynamically provisioned on multiple container hosts using Kubernetes and OpenShift Container Platform.
To use the Kubernetes plugin, OpenShift Container Platform provides images that are suitable for use as Jenkins agents including the Base, Maven, and Node.js images.
Both the Maven and Node.js agent images are automatically configured as Kubernetes pod template images within the OpenShift Container Platform Jenkins image configuration for the Kubernetes plugin. That configuration includes labels for each of the images that can be applied to any of your Jenkins jobs under their Restrict where this project can be run setting. If the label is applied, jobs run under an OpenShift Container Platform pod running the respective agent image.
The Jenkins image also provides auto-discovery and auto-configuration of additional agent images for the Kubernetes plugin.
With the OpenShift Container Platform sync plugin, on Jenkins startup the Jenkins image searches for the following within the project that it is running in, or within the projects specifically listed in the plugin's configuration:
- Image streams that have the label role set to jenkins-agent.
- Image stream tags that have the annotation role set to jenkins-agent.
- Config maps that have the label role set to jenkins-agent.
When it finds an image stream with the appropriate label, or image stream tag with the appropriate annotation, it generates the corresponding Kubernetes plugin configuration so you can assign your Jenkins jobs to run in a pod that runs the container image that is provided by the image stream.
The name and image references of the image stream or image stream tag are mapped to the name and image fields in the Kubernetes plugin pod template. You can control the label field of the Kubernetes plugin pod template by setting an annotation on the image stream or image stream tag object with the key agent-label. Otherwise, the name is used as the label.
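A minimal image stream that the sync plugin would pick up might look like the following sketch. The name, image reference, and agent-label value are placeholders:

```yaml
# Sketch of an auto-discovered Jenkins agent image stream.
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: my-agent                    # becomes the pod template name
  labels:
    role: jenkins-agent             # triggers auto-discovery by the sync plugin
  annotations:
    agent-label: my-agent-label     # optional: overrides the pod template label
spec:
  dockerImageRepository: quay.io/example/my-agent
```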
Do not log in to the Jenkins console and modify the pod template configuration. If you do so after the pod template is created, and the OpenShift Container Platform Sync plugin detects that the image associated with the image stream or image stream tag has changed, it replaces the pod template and overwrites those configuration changes. You cannot merge a new configuration with the existing configuration.
Consider the config map approach if you have more complex configuration needs.
When it finds a config map with the appropriate label, it assumes that any values in the key-value data payload of the config map contains Extensible Markup Language (XML) that is consistent with the configuration format for Jenkins and the Kubernetes plugin pod templates. A key differentiator to note when using config maps, instead of image streams or image stream tags, is that you can control all the parameters of the Kubernetes plugin pod template.
Sample config map for jenkins-agent
kind: ConfigMap
apiVersion: v1
metadata:
name: jenkins-agent
labels:
role: jenkins-agent
data:
template1: |-
<org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
<inheritFrom></inheritFrom>
<name>template1</name>
<instanceCap>2147483647</instanceCap>
<idleMinutes>0</idleMinutes>
<label>template1</label>
<serviceAccount>jenkins</serviceAccount>
<nodeSelector></nodeSelector>
<volumes/>
<containers>
<org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
<name>jnlp</name>
<image>openshift/jenkins-agent-maven-35-centos7:v3.10</image>
<privileged>false</privileged>
<alwaysPullImage>true</alwaysPullImage>
<workingDir>/tmp</workingDir>
<command></command>
<args>${computer.jnlpmac} ${computer.name}</args>
<ttyEnabled>false</ttyEnabled>
<resourceRequestCpu></resourceRequestCpu>
<resourceRequestMemory></resourceRequestMemory>
<resourceLimitCpu></resourceLimitCpu>
<resourceLimitMemory></resourceLimitMemory>
<envVars/>
</org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
</containers>
<envVars/>
<annotations/>
<imagePullSecrets/>
<nodeProperties/>
</org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
If you log in to the Jenkins console and make further changes to the pod template configuration after the pod template is created, and the OpenShift Container Platform Sync plugin detects that the config map has changed, it will replace the pod template and overwrite those configuration changes. You cannot merge a new configuration with the existing configuration.
After it is installed, the OpenShift Container Platform Sync plugin monitors the API server of OpenShift Container Platform for updates to image streams, image stream tags, and config maps and adjusts the configuration of the Kubernetes plugin.
The following rules apply:
- Removing the label or annotation from the config map, image stream, or image stream tag results in the deletion of any existing PodTemplate from the configuration of the Kubernetes plugin.
- If those objects are removed, the corresponding configuration is removed from the Kubernetes plugin.
- Either creating appropriately labeled or annotated ConfigMap, ImageStream, or ImageStreamTag objects, or adding labels after their initial creation, leads to the creation of a PodTemplate in the Kubernetes plugin configuration.
- In the case of the PodTemplate by config map form, changes to the config map data for the PodTemplate are applied to the PodTemplate settings in the Kubernetes plugin configuration, and override any changes that were made to the PodTemplate through the Jenkins UI between changes to the config map.
To use a container image as a Jenkins agent, the image must run the agent as an entrypoint. For more details about this, refer to the official Jenkins documentation.
12.2.7. Jenkins permissions
If the <serviceAccount> element of the pod template XML in the config map is the OpenShift Container Platform service account used for the resulting pod, the service account credentials are mounted into the pod. The permissions are associated with the service account and control which operations against the OpenShift Container Platform master are allowed from the pod.
Consider the following scenario with service accounts used for the pod, which is launched by the Kubernetes Plugin that runs in the OpenShift Container Platform Jenkins image.
If you use the example template for Jenkins that is provided by OpenShift Container Platform, the jenkins service account is defined with the edit role for the project Jenkins runs in, and the master Jenkins pod has that service account mounted.
The two default Maven and NodeJS pod templates that are injected into the Jenkins configuration are also set to use the same service account as the Jenkins master.
- Any pod templates that are automatically discovered by the OpenShift Container Platform sync plugin because their image streams or image stream tags have the required label or annotations are configured to use the Jenkins master service account as their service account.
- For the other ways you can provide a pod template definition into Jenkins and the Kubernetes plugin, you have to explicitly specify the service account to use. Those other ways include the Jenkins console, the podTemplate pipeline DSL that is provided by the Kubernetes plugin, or labeling a config map whose data is the XML configuration for a pod template.
- If you do not specify a value for the service account, the default service account is used.
- Ensure that whatever service account is used has the necessary permissions, roles, and so on defined within OpenShift Container Platform to manipulate whatever projects you choose to manipulate from within the pod.
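Granting a service account the edit role mentioned in these examples can be expressed as a role binding. This is a sketch; the project name my-project is a placeholder:

```yaml
# Sketch: bind the predefined edit cluster role to the jenkins
# service account within a single project.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-edit
  namespace: my-project
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: my-project
```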
12.2.8. Creating a Jenkins service from a template
Templates provide parameter fields to define all the environment variables with predefined default values. OpenShift Container Platform provides templates to make creating a new Jenkins service easy. The Jenkins templates should be registered in the default openshift project by the cluster administrator during the initial cluster setup.
The two available templates both define deployment configuration and a service. The templates differ in their storage strategy, which affects whether or not the Jenkins content persists across a pod restart.
A pod might be restarted when it is moved to another node or when an update of the deployment configuration triggers a redeployment.
- jenkins-ephemeral uses ephemeral storage. On pod restart, all data is lost. This template is only useful for development or testing.
- jenkins-persistent uses a Persistent Volume (PV) store. Data survives a pod restart.
To use a PV store, the cluster administrator must define a PV pool in the OpenShift Container Platform deployment.
After you select which template you want, you must instantiate the template to be able to use Jenkins.
Procedure
Create a new Jenkins application using one of the following methods:
A PV:
$ oc new-app jenkins-persistent

Or an emptyDir type volume where configuration does not persist across pod restarts:

$ oc new-app jenkins-ephemeral
12.2.9. Using the Jenkins Kubernetes plugin
In the following example, the openshift-jee-sample BuildConfig causes a Jenkins Maven agent pod to be dynamically provisioned. The pod clones some Java source code, builds a WAR file, and causes a second BuildConfig, openshift-jee-sample-docker, to run. The second BuildConfig layers the new WAR file into a container image.
Sample BuildConfig that uses the Jenkins Kubernetes plugin
kind: List
apiVersion: v1
items:
- kind: ImageStream
apiVersion: image.openshift.io/v1
metadata:
name: openshift-jee-sample
- kind: BuildConfig
apiVersion: build.openshift.io/v1
metadata:
name: openshift-jee-sample-docker
spec:
strategy:
type: Docker
source:
type: Docker
dockerfile: |-
FROM openshift/wildfly-101-centos7:latest
COPY ROOT.war /wildfly/standalone/deployments/ROOT.war
CMD $STI_SCRIPTS_PATH/run
binary:
asFile: ROOT.war
output:
to:
kind: ImageStreamTag
name: openshift-jee-sample:latest
- kind: BuildConfig
apiVersion: build.openshift.io/v1
metadata:
name: openshift-jee-sample
spec:
strategy:
type: JenkinsPipeline
jenkinsPipelineStrategy:
jenkinsfile: |-
node("maven") {
sh "git clone https://github.com/openshift/openshift-jee-sample.git ."
sh "mvn -B -Popenshift package"
sh "oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war"
}
triggers:
- type: ConfigChange
It is also possible to override the specification of the dynamically created Jenkins agent pod. The following is a modification to the previous example, which overrides the container memory and specifies an environment variable.
Sample BuildConfig that uses the Jenkins Kubernetes Plugin, specifying memory limit and environment variable
kind: BuildConfig
apiVersion: build.openshift.io/v1
metadata:
name: openshift-jee-sample
spec:
strategy:
type: JenkinsPipeline
jenkinsPipelineStrategy:
jenkinsfile: |-
podTemplate(label: "mypod",
cloud: "openshift",
inheritFrom: "maven",
containers: [
containerTemplate(name: "jnlp",
image: "openshift/jenkins-agent-maven-35-centos7:v3.10",
resourceRequestMemory: "512Mi",
resourceLimitMemory: "512Mi",
envVars: [
envVar(key: "CONTAINER_HEAP_PERCENT", value: "0.25")
])
]) {
node("mypod") {
sh "git clone https://github.com/openshift/openshift-jee-sample.git ."
sh "mvn -B -Popenshift package"
sh "oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war"
}
}
triggers:
- type: ConfigChange
- 1: A new pod template called mypod is defined dynamically. The new pod template name is referenced in the node stanza.
- 2: The cloud value must be set to openshift.
- 3: The new pod template can inherit its configuration from an existing pod template. In this case, it inherits from the Maven pod template that is pre-defined by OpenShift Container Platform.
- 4: This example overrides values in the pre-existing container, and must be specified by name. All Jenkins agent images shipped with OpenShift Container Platform use the container name jnlp.
- 5: Specify the container image name again. This is a known issue.
- 6: A memory request of 512 Mi is specified.
- 7: A memory limit of 512 Mi is specified.
- 8: An environment variable CONTAINER_HEAP_PERCENT, with value 0.25, is specified.
- 9: The node stanza references the name of the defined pod template.
By default, the pod is deleted when the build completes. This behavior can be modified with the plugin or within a pipeline Jenkinsfile.
12.2.10. Jenkins memory requirements
When deployed by the provided Jenkins Ephemeral or Jenkins Persistent templates, the default memory limit is 1 Gi.

By default, all other processes that run in the Jenkins container cannot use more than a total of 512 MiB of memory. If they require more memory, the container halts. It is therefore highly recommended that pipelines run external commands in an agent container wherever possible.

If Project quotas allow for it, see recommendations from the Jenkins documentation on what a Jenkins master should have from a memory perspective; those recommendations prescribe allocating even more memory for the Jenkins master.
It is recommended to specify memory request and limit values on agent containers created by the Jenkins Kubernetes plugin. Admin users can set default values on a per-agent image basis through the Jenkins configuration. The memory request and limit parameters can also be overridden on a per-container basis.
You can increase the amount of memory available to Jenkins by overriding the MEMORY_LIMIT parameter when instantiating the Jenkins Ephemeral or Jenkins Persistent template.
12.3. Jenkins agent
OpenShift Container Platform provides three images that are suitable for use as Jenkins agents: the Base, Maven, and Node.js images.
The first is a base image for Jenkins agents:
- It pulls in both the required tools, headless Java, the Jenkins JNLP client, and the useful ones, including git, tar, zip, and nss, among others.
- It establishes the JNLP agent as the entrypoint.
- It includes the oc client tooling for invoking command line operations from within Jenkins jobs.
- It provides Dockerfiles for both Red Hat Enterprise Linux (RHEL) and localdev images.
Two more images that extend the base image are also provided:
- Maven v3.5 image
- Node.js v10 image and Node.js v12 image
The Maven and Node.js Jenkins agent images provide Dockerfiles for the Universal Base Image (UBI) that you can reference when building new agent images. Also note the contrib and contrib/bin subdirectories: they allow you to insert configuration files and executable scripts for your image.
Use and extend an appropriate agent image version for your version of OpenShift Container Platform. If the oc client version embedded in the agent image is not compatible with the OpenShift Container Platform version, unexpected behavior can result.
12.3.1. Jenkins agent images
The OpenShift Container Platform Jenkins agent images are available on Quay.io or registry.redhat.io.
Jenkins images are available through the Red Hat Registry:
$ docker pull registry.redhat.io/openshift4/ose-jenkins:<v4.5.0>
$ docker pull registry.redhat.io/openshift4/jenkins-agent-nodejs-10-rhel7:<v4.5.0>
$ docker pull registry.redhat.io/openshift4/jenkins-agent-nodejs-12-rhel7:<v4.5.0>
$ docker pull registry.redhat.io/openshift4/ose-jenkins-agent-maven:<v4.5.0>
$ docker pull registry.redhat.io/openshift4/ose-jenkins-agent-base:<v4.5.0>
To use these images, you can either access them directly from Quay.io or registry.redhat.io or push them into your OpenShift Container Platform container image registry.
12.3.2. Jenkins agent environment variables
Each Jenkins agent container can be configured with the following environment variables.
| Variable | Definition | Example values and settings |
|---|---|---|
| JAVA_MAX_HEAP_PARAM, CONTAINER_HEAP_PERCENT, JENKINS_MAX_HEAP_UPPER_BOUND_MB | These values control the maximum heap size of the Jenkins JVM. If JAVA_MAX_HEAP_PARAM is set, its value takes precedence. Otherwise, the maximum heap size is dynamically calculated as CONTAINER_HEAP_PERCENT of the container memory limit, optionally capped at JENKINS_MAX_HEAP_UPPER_BOUND_MB MiB. By default, the maximum heap size of the Jenkins JVM is set to 50% of the container memory limit with no cap. | JAVA_MAX_HEAP_PARAM example setting: -Xmx512m. CONTAINER_HEAP_PERCENT default: 0.5, or 50%. JENKINS_MAX_HEAP_UPPER_BOUND_MB example setting: 512 MiB |
| JAVA_INITIAL_HEAP_PARAM, CONTAINER_INITIAL_PERCENT | These values control the initial heap size of the Jenkins JVM. If JAVA_INITIAL_HEAP_PARAM is set, its value takes precedence. Otherwise, the initial heap size is dynamically calculated as CONTAINER_INITIAL_PERCENT of the dynamically calculated maximum heap size. By default, the JVM sets the initial heap size. | JAVA_INITIAL_HEAP_PARAM example setting: -Xms32m. CONTAINER_INITIAL_PERCENT example setting: 0.1, or 10% |
| CONTAINER_CORE_LIMIT | If set, specifies an integer number of cores used for sizing numbers of internal JVM threads. | Example setting: 2 |
| JAVA_TOOL_OPTIONS | Specifies options to apply to all JVMs running in this container. It is not recommended to override this value. | Default: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true |
| JAVA_GC_OPTS | Specifies Jenkins JVM garbage collection parameters. It is not recommended to override this value. | Default: -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 |
| JENKINS_JAVA_OVERRIDES | Specifies additional options for the Jenkins JVM. These options are appended to all other options, including the Java options above, and can be used to override any of them, if necessary. Separate each additional option with a space; if any option contains space characters, escape them with a backslash. | Example settings: -Dfoo -Dbar; -Dfoo=first\ value -Dbar=second\ value |
| USE_JAVA_VERSION | Specifies the version of Java to use to run the agent in its container. The container base image has two versions of Java installed: java-11 and java-1.8.0. If you extend the container base image, you can specify any alternative version of Java using its associated suffix. | The default value is java-11. Example setting: java-1.8.0 |
12.3.3. Jenkins agent memory requirements
A JVM is used in all Jenkins agents to host the Jenkins JNLP agent as well as to run any Java applications such as javac, Maven, or Gradle.
By default, the Jenkins JNLP agent JVM uses 50% of the container memory limit for its heap. This value can be modified by the CONTAINER_HEAP_PERCENT environment variable. It can also be capped at an upper limit or overridden entirely.
By default, any other processes run in the Jenkins agent container, such as shell scripts or oc commands run from pipelines, cannot use more than the remaining 50% of the container memory limit without provoking an OOM kill.
By default, each further JVM process that runs in a Jenkins agent container uses up to 25% of the container memory limit for its heap. It might be necessary to tune this limit for many build workloads.
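The default split described above can be sketched with simple arithmetic. The numbers are illustrative, not the image's actual startup logic:

```shell
# Default memory split in a Jenkins agent container, for a 1 GiB limit:
# the JNLP agent JVM heap gets 50%, and each further JVM process gets
# up to 25% of the container memory limit.
limit_mb=1024
jnlp_heap_mb=$((limit_mb * 50 / 100))
child_heap_mb=$((limit_mb * 25 / 100))
echo "jnlp=-Xmx${jnlp_heap_mb}m child=-Xmx${child_heap_mb}m"
```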
12.3.4. Jenkins agent Gradle builds
Hosting Gradle builds in the Jenkins agent on OpenShift Container Platform presents additional complications because in addition to the Jenkins JNLP agent and Gradle JVMs, Gradle spawns a third JVM to run tests if they are specified.
The following settings are suggested as a starting point for running Gradle builds in a memory constrained Jenkins agent on OpenShift Container Platform. You can modify these settings as required.
- Ensure the long-lived Gradle daemon is disabled by adding org.gradle.daemon=false to the gradle.properties file.
- Disable parallel build execution by ensuring org.gradle.parallel=true is not set in the gradle.properties file and that --parallel is not set as a command line argument.
- To prevent Java compilations running out-of-process, set java { options.fork = false } in the build.gradle file.
- Disable multiple additional test processes by ensuring test { maxParallelForks = 1 } is set in the build.gradle file.
- Override the Gradle JVM memory parameters by the GRADLE_OPTS, JAVA_OPTS, or JAVA_TOOL_OPTIONS environment variables.
- Set the maximum heap size and JVM arguments for any Gradle test JVM by defining the maxHeapSize and jvmArgs settings in build.gradle, or through the -Dorg.gradle.jvmargs command line argument.
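Collected together, the suggested settings look like the following fragments. As the list above says, these are starting points, not tuned values; the 256m test heap is illustrative:

```
# gradle.properties (daemon disabled; org.gradle.parallel left unset)
org.gradle.daemon=false

// build.gradle additions (in-process compilation, single test fork)
java { options.fork = false }
test {
    maxParallelForks = 1
    maxHeapSize = "256m"    // illustrative; tune for your workload
}
```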
12.3.5. Jenkins agent pod retention
Jenkins agent pods are deleted by default after the build completes or is stopped. This behavior can be changed by the Kubernetes plugin pod retention setting. Pod retention can be set for all Jenkins builds, with overrides for each pod template. The following behaviors are supported:
- Always keeps the build pod regardless of build result.
- Default uses the plugin value, which is the pod template only.
- Never always deletes the pod.
- On Failure keeps the pod if it fails during the build.
You can override pod retention in the pipeline Jenkinsfile:
podTemplate(label: "mypod",
cloud: "openshift",
inheritFrom: "maven",
podRetention: onFailure(),
containers: [
...
]) {
node("mypod") {
...
}
}
- 1: Allowed values for podRetention are never(), onFailure(), always(), and default().
Pods that are kept might continue to run and count against resource quotas.
12.4. Source-to-image
You can use the Red Hat Software Collections images as a foundation for applications that rely on specific runtime environments such as Node.js, Perl, or Python. You can use the Red Hat Java Source-to-Image for OpenShift documentation as a reference for runtime environments that use Java. Special versions of some of these runtime base images are referred to as Source-to-Image (S2I) images. With S2I images, you can insert your code into a base image environment that is ready to run that code.
S2I images include:
- .NET
- Java
- Go
- Node.js
- Perl
- PHP
- Python
- Ruby
S2I images are available for you to use directly from the OpenShift Container Platform web console by following this procedure:
- Log in to the OpenShift Container Platform web console using your login credentials. The default view for the OpenShift Container Platform web console is the Administrator perspective.
- Use the perspective switcher to switch to the Developer perspective.
- In the +Add view, select an existing project from the list or use the Project drop-down list to create a new project.
- Choose All services under the Developer Catalog tile.
- Select Builder Images under Type to see the available S2I images.
S2I images are also available through the Cluster Samples Operator; see Configuring the Cluster Samples Operator.
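From the command line, you can also list the builder image streams directly. This sketch assumes the sample image streams are installed in the default openshift namespace:

```shell
$ oc get imagestreams -n openshift
```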
12.4.1. Source-to-image build process overview
Source-to-image (S2I) produces ready-to-run images by injecting source code into a container that prepares that source code to be run. It performs the following steps:
- Runs the FROM <builder image> command
- Copies the source code to a defined location in the builder image
- Runs the assemble script in the builder image
- Sets the run script in the builder image as the default command
Buildah then creates the container image.
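The steps above are roughly equivalent to the following Dockerfile sketch. This is an illustration of what an S2I build effectively produces, not a file you write yourself: the builder image name and source location are hypothetical placeholders, and the script paths assume the common /usr/libexec/s2i convention used by many builder images.

```dockerfile
# Hypothetical builder image that carries the assemble and run scripts.
FROM registry.example.com/my-builder-image
# Inject the application source into the builder image.
COPY ./src /tmp/src
# Run the builder's assemble script to prepare the source.
RUN /usr/libexec/s2i/assemble
# Make the builder's run script the default command.
CMD ["/usr/libexec/s2i/run"]
```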
12.5. Customizing source-to-image images
Source-to-image (S2I) builder images include assemble and run scripts, but the default behavior of those scripts is not suitable for all users. You can customize the behavior of an S2I builder that includes default scripts.
12.5.1. Invoking scripts embedded in an image
Builder images provide their own version of the source-to-image (S2I) scripts that cover the most common use-cases. If these scripts do not fulfill your needs, S2I provides a way of overriding them by adding custom ones in the .s2i/bin directory of your application source.
Procedure
Look at the value of the io.openshift.s2i.scripts-url label to determine the location of the scripts inside of the builder image:

$ podman inspect --format='{{ index .Config.Labels "io.openshift.s2i.scripts-url" }}' wildfly/wildfly-centos7

Example output

image:///usr/libexec/s2i

You inspected the wildfly/wildfly-centos7 builder image and found out that the scripts are in the /usr/libexec/s2i directory.

Create a .s2i/bin/assemble script that includes an invocation of one of the standard scripts wrapped in other commands:

#!/bin/bash
echo "Before assembling"
/usr/libexec/s2i/assemble
rc=$?
if [ $rc -eq 0 ]; then
    echo "After successful assembling"
else
    echo "After failed assembling"
fi
exit $rc

This example shows a custom assemble script that prints a message, runs the standard assemble script from the image, and prints another message depending on the exit code of the assemble script.
Important

When wrapping the run script, you must use exec for invoking it to ensure signals are handled properly. The use of exec also precludes the ability to run additional commands after invoking the default image run script.

The .s2i/bin/run script:

#!/bin/bash
echo "Before running application"
exec /usr/libexec/s2i/run
Legal Notice
Copyright © Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of the OpenJS Foundation.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.