Chapter 1. Storing models


You must store your model before you can deploy it. You can store a model in an S3 bucket, at a URI, or in an Open Container Initiative (OCI) container.

1.1. Using OCI containers for model storage

As an alternative to storing a model in an S3 bucket or at a URI, you can upload models to Open Container Initiative (OCI) containers. In KServe, deploying models from OCI containers is also known as modelcars.

Using OCI containers for model storage can help you:

  • Reduce startup times by avoiding downloading the same model multiple times.
  • Reduce disk space usage by reducing the number of models downloaded locally.
  • Improve performance by allowing model images to be pre-fetched onto cluster nodes.

Using OCI containers for model storage begins with storing a model in an OCI image, as described in the following section.

1.2. Storing a model in an OCI image

You can store a model in an OCI image. The following procedure uses the example of storing a MobileNet v2-7 model in ONNX format.

Prerequisites

  • You have a model in the ONNX format. The example in this procedure uses the MobileNet v2-7 model in ONNX format.
  • You have installed the Podman tool.

Procedure

  1. In a terminal window on your local machine, create a temporary directory for storing both the model and the support files that you need to create the OCI image:

    cd $(mktemp -d)
  2. Create a models folder inside the temporary directory:

    mkdir -p models/1
    Note

    This example command specifies the subdirectory 1 because OpenVINO requires numbered subdirectories for model versioning. If you are not using OpenVINO, you do not need to create the 1 subdirectory to use OCI container images.

  3. Download the model and support files:

    DOWNLOAD_URL=https://github.com/onnx/models/raw/main/validated/vision/classification/mobilenet/model/mobilenetv2-7.onnx
    curl -L $DOWNLOAD_URL -O --output-dir models/1/
  4. Use the tree command to confirm that the model files are located in the directory structure as expected:

    tree

    The tree command should return a directory structure similar to the following example:

    .
    └── models
        └── 1
            └── mobilenetv2-7.onnx
  5. Create a file named Containerfile with the following content:

    FROM registry.access.redhat.com/ubi9/ubi-micro:latest

    # Copy the model files as root-owned and make them readable by any user
    COPY --chown=0:0 models /models
    RUN chmod -R a=rX /models

    # Run as the unprivileged "nobody" user
    USER 65534

    Note
    • Specify a base image that provides a shell. In this example, the base container image is registry.access.redhat.com/ubi9/ubi-micro. You cannot use an empty image that does not provide a shell, such as scratch, because KServe uses the shell to ensure that the model files are accessible to the model server.
    • Change the ownership of the copied model files and grant read permissions to the root group to ensure that the model server can access the files. OpenShift runs containers with a random user ID and the root group ID.

  6. Use the podman build and podman push commands to create the OCI container image and upload it to a registry. The following commands use Quay as the registry.

    Note

    If your repository is private, ensure that you are authenticated to the registry before uploading your container image. An example login command is shown after this procedure.

    podman build --format=oci -t quay.io/<user_name>/<repository_name>:<tag_name> .
    podman push quay.io/<user_name>/<repository_name>:<tag_name>
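
As noted in the previous step, if your repository is private, you must authenticate to the registry before you push the image. For example, to log in to Quay:

    # Example: interactive login to Quay (substitute the registry that you use)
    podman login quay.io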
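
Verification

Optionally, run the image locally to confirm that the model files have the expected ownership and permissions. The following command is a minimal check: it assumes that the base image provides the ls command, and the user ID 1000 is an arbitrary non-root ID in the root group, similar to the IDs that OpenShift assigns:

    # Run as an arbitrary non-root user in the root group and list the model files
    podman run --rm --user 1000:0 quay.io/<user_name>/<repository_name>:<tag_name> ls -lR /models

The model files should be listed as owned by root and readable by all users.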
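
Next steps

After you push the image, you can reference it when you deploy the model with KServe. The following manifest is a minimal sketch rather than a complete deployment procedure: it assumes that the modelcars feature is enabled in your KServe configuration, and the resource name mobilenet-v2 and the image reference are placeholder values that you replace with your own:

    # Example InferenceService (sketch): the name, model format, and image reference are placeholders
    apiVersion: serving.kserve.io/v1beta1
    kind: InferenceService
    metadata:
      name: mobilenet-v2
    spec:
      predictor:
        model:
          modelFormat:
            name: onnx
          storageUri: oci://quay.io/<user_name>/<repository_name>:<tag_name>

Save the manifest to a file and apply it to your project, for example by running oc apply -f inferenceservice.yaml, where inferenceservice.yaml is an example file name.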