Chapter 4. Customizing model deployments
You can customize a model’s deployment to suit your specific needs, for example, to deploy a particular family of models or to enhance an existing deployment. You can modify the runtime configuration for a specific deployment by setting additional serving runtime arguments and environment variables.
These customizations apply only to the selected model deployment and do not change the default runtime configuration. You can set these parameters when you first deploy a model or by editing an existing deployment.
4.1. Customizing the parameters of a deployed model-serving runtime
You might need additional parameters beyond the default ones to deploy specific models or to enhance an existing model deployment. In such cases, you can modify the parameters of an existing runtime to suit your deployment needs.
Customizing the parameters of a runtime only affects the selected model deployment.
Prerequisites
- You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.
- You have deployed a model.
Procedure
- From the OpenShift AI dashboard, click AI hub → Deployments. The Deployments page opens.
- Click Stop next to the name of the model you want to customize.
- Click the action menu (⋮) and select Edit. The Configuration parameters section shows predefined serving runtime parameters, if any are available.
- Customize the runtime parameters in the Configuration parameters section:
  - Modify the values in Additional serving runtime arguments to define how the deployed model behaves.
  - Modify the values in Additional environment variables to define variables in the model’s environment.
Note: Do not modify the port or model serving runtime arguments, because they require specific values to be set. Overwriting these parameters can cause the deployment to fail.
- After you are done customizing the runtime parameters, click Redeploy to save.
- Click Start to deploy the model with your changes.
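The arguments and environment variables that you set are stored on the deployment's InferenceService resource. As a rough sketch (the argument, variable name, and values are illustrative, not defaults), the resulting spec looks similar to:

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: <model_name>
spec:
  predictor:
    model:
      args:
        - --max-model-len=6144   # example additional serving runtime argument
      env:
        - name: EXAMPLE_VAR      # example additional environment variable
          value: "1"
```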
Verification
- Confirm that the deployed model is shown on the Deployments tab for the project, and on the Deployments page of the dashboard with a checkmark in the Status column.
- Confirm that the arguments and variables that you set appear in spec.predictor.model.args and spec.predictor.model.env by one of the following methods:
  - Checking the InferenceService YAML from the OpenShift Console.
  - Using the following command in the OpenShift CLI:

    oc get -o json inferenceservice <inferenceservicename/modelname> -n <projectname>
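To automate this check, you can parse the JSON that the oc command above returns and pull out the args and env lists. A minimal Python sketch, using a stubbed response in place of live cluster output (the argument and variable shown are illustrative):

```python
import json

def get_runtime_config(inference_json: str):
    """Extract serving runtime args and env vars from an InferenceService JSON dump."""
    resource = json.loads(inference_json)
    model = resource["spec"]["predictor"]["model"]
    return model.get("args", []), model.get("env", [])

# Stubbed response; in practice, capture the output of:
#   oc get -o json inferenceservice <inferenceservicename/modelname> -n <projectname>
stub = json.dumps({
    "spec": {"predictor": {"model": {
        "args": ["--max-model-len=6144"],
        "env": [{"name": "EXAMPLE_VAR", "value": "1"}],
    }}}
})

args, env = get_runtime_config(stub)
```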
4.2. Customizable model serving runtime parameters
You can modify the parameters of an existing model serving runtime to suit your deployment needs.
For more information about parameters for each of the supported serving runtimes, see the following table:
| Serving runtime | Resource |
|---|---|
| NVIDIA Triton Inference Server | |
| OpenVINO Model Server | |
| Seldon MLServer | |
| vLLM NVIDIA GPU ServingRuntime for KServe | vLLM: Engine Arguments |
| vLLM AMD GPU ServingRuntime for KServe | vLLM: Engine Arguments |
| vLLM Intel Gaudi Accelerator ServingRuntime for KServe | vLLM: Engine Arguments |
4.3. Customizing the vLLM model-serving runtime
In certain cases, you might need to add flags or environment variables to the vLLM ServingRuntime for KServe runtime to deploy a family of LLMs.
The following procedure describes how to customize the vLLM model-serving runtime to deploy a Llama, Granite, or Mistral model.
Prerequisites
- You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges.
- For Llama model deployment, you have downloaded a meta-llama-3 model to your object storage.
- For Granite model deployment, you have downloaded a granite-7b-instruct or granite-20B-code-instruct model to your object storage.
- For Mistral model deployment, you have downloaded a mistral-7B-Instruct-v0.3 model to your object storage.
- You have enabled the vLLM ServingRuntime for KServe runtime.
- You have enabled GPU support in OpenShift AI and have installed and configured the Node Feature Discovery Operator on your cluster. For more information, see Installing the Node Feature Discovery Operator and Enabling NVIDIA GPUs.
Procedure
- Follow the steps to deploy a model as described in Deploying models on the model serving platform.
- In the Serving runtime field, select vLLM ServingRuntime for KServe.
- If you are deploying a meta-llama-3 model, add the following arguments under Additional serving runtime arguments in the Configuration parameters section:

  --distributed-executor-backend=mp
  --max-model-len=6144

  The --distributed-executor-backend=mp argument sets the backend to multiprocessing for distributed model workers, and --max-model-len caps the model’s context length in tokens.

- If you are deploying a granite-7b-instruct model, add the following argument under Additional serving runtime arguments in the Configuration parameters section:

  --distributed-executor-backend=mp

- If you are deploying a granite-20B-code-instruct model, add the following arguments under Additional serving runtime arguments in the Configuration parameters section:

  --distributed-executor-backend=mp
  --tensor-parallel-size=4
  --max-model-len=6448

  The --tensor-parallel-size=4 argument distributes the model across four GPUs by using tensor parallelism.

- If you are deploying a mistral-7B-Instruct-v0.3 model, add the following arguments under Additional serving runtime arguments in the Configuration parameters section:

  --distributed-executor-backend=mp
  --max-model-len=15344

- Click Deploy.
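For reference, the per-model arguments in this procedure can be collected into a small lookup table. A minimal Python sketch, keyed by the model names used above:

```python
# Additional serving runtime arguments recommended in this procedure, keyed by model.
VLLM_RUNTIME_ARGS = {
    "meta-llama-3": [
        "--distributed-executor-backend=mp",
        "--max-model-len=6144",
    ],
    "granite-7b-instruct": [
        "--distributed-executor-backend=mp",
    ],
    "granite-20B-code-instruct": [
        "--distributed-executor-backend=mp",
        "--tensor-parallel-size=4",
        "--max-model-len=6448",
    ],
    "mistral-7B-Instruct-v0.3": [
        "--distributed-executor-backend=mp",
        "--max-model-len=15344",
    ],
}

def args_for(model: str) -> list:
    """Return the additional serving runtime arguments for a known model."""
    return VLLM_RUNTIME_ARGS[model]
```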
Verification
- Confirm that the deployed model is shown on the Deployments tab for the project, and on the Deployments page of the dashboard with a checkmark in the Status column.
- For granite models, use the following example command to verify API requests to your deployed model:

  curl -q -X 'POST' \
      "https://<inference_endpoint_url>:443/v1/chat/completions" \
      -H 'accept: application/json' \
      -H 'Content-Type: application/json' \
      -d "{
      \"model\": \"<model_name>\",
      \"prompt\": \"<prompt>\",
      \"max_tokens\": <max_tokens>,
      \"temperature\": <temperature>
      }"
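The same request can be sketched with the Python standard library. The URL, model name, and prompt below are the placeholders from the curl example, and the max_tokens and temperature values are illustrative:

```python
import json
import urllib.request

# Placeholders from the curl example; substitute your deployment's real values.
url = "https://<inference_endpoint_url>:443/v1/chat/completions"
payload = {
    "model": "<model_name>",
    "prompt": "<prompt>",
    "max_tokens": 512,     # illustrative value
    "temperature": 0.7,    # illustrative value
}

request = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"accept": "application/json", "Content-Type": "application/json"},
    method="POST",
)
# Sending the request requires a reachable inference endpoint:
# with urllib.request.urlopen(request) as response:
#     print(json.load(response))
```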