This documentation is for a release that is no longer maintained
See the documentation for the latest supported version 3 or the latest supported version 4.
Chapter 3. Installing OpenShift Serverless
OpenShift Serverless is not tested or supported for installation in a restricted network environment.
3.1. Cluster size requirements
The cluster must be sized appropriately to ensure that OpenShift Serverless can run correctly. You can use the MachineSet API to manually scale your cluster up to the desired size.
An OpenShift cluster with 10 CPUs and 40 GB memory is the minimum requirement for getting started with your first serverless application. This usually means you must scale up one of the default MachineSets by two additional machines.
For this configuration, the requirements depend on the deployed applications. By default, each pod requests approximately 400m of CPU, and the recommendations are based on this value. With the stated minimum, an application can scale up to 10 replicas. Lowering the actual CPU request of the application increases the possible number of replicas.
The numbers given only relate to the pool of worker machines of the OpenShift cluster. Master nodes are not used for general scheduling and are omitted.
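As a sketch of how the CPU request affects this calculation, the following hypothetical Knative service lowers its request below the default; the service name, image, and 250m value are illustrative assumptions, not part of the original documentation:

```yaml
# Hypothetical Knative service lowering the default CPU request.
# With a 250m request instead of the default ~400m, the same worker
# pool can hold proportionally more replicas.
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: example-app              # placeholder name
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: registry.example.com/example-app:latest  # placeholder image
            resources:
              requests:
                cpu: 250m        # lower than the ~400m default
```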
For more advanced use cases, such as using OpenShift logging, monitoring, metering, and tracing, you must deploy more resources. The recommended requirements for such use cases are 24 vCPUs and 96 GB of memory.
Additional resources
For more information on using the MachineSet API, see Creating MachineSets.
3.1.1. Scaling a MachineSet manually
If you must add or remove an instance of a machine in a MachineSet, you can manually scale the MachineSet.
Prerequisites
- Install an OpenShift Container Platform cluster and the oc command line.
- Log in to oc as a user with cluster-admin permission.
Procedure
View the MachineSets that are in the cluster:

$ oc get machinesets -n openshift-machine-api

The MachineSets are listed in the form of <clusterid>-worker-<aws-region-az>.

Scale the MachineSet:
$ oc scale --replicas=2 machineset <machineset> -n openshift-machine-api

Alternatively, edit the MachineSet directly:
$ oc edit machineset <machineset> -n openshift-machine-api

You can scale the MachineSet up or down. It takes several minutes for the new machines to become available.
Important: By default, the OpenShift Container Platform router pods are deployed on workers. Because the router is required to access some cluster resources, including the web console, do not scale the worker MachineSet to 0 unless you first relocate the router pods.
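Both commands above change the same field of the MachineSet resource. As a sketch, with a placeholder MachineSet name:

```yaml
# Sketch of the MachineSet field changed by oc scale / oc edit.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <clusterid>-worker-us-east-1a   # placeholder name
  namespace: openshift-machine-api
spec:
  replicas: 2   # the value set by --replicas=2
```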
3.2. Installing Service Mesh
An installed version of Service Mesh is required for the installation of OpenShift Serverless. For details, see the OpenShift Container Platform documentation on Installing Service Mesh.
Use the Service Mesh documentation for Operator installation only. Once you install the Operators, use the documentation below to install the Service Mesh Control Plane and Member Roll.
3.2.1. Installing the ServiceMeshControlPlane
Service Mesh is composed of a data plane and a control plane. After you install the Service Mesh Operator, you can install the control plane. The control plane manages and configures the sidecar proxies to enforce policies and collect telemetry. The following procedure installs a version of the Service Mesh control plane that acts as an ingress to your applications.
You must install the control plane into the istio-system namespace. Other namespaces are currently not supported.
Sample YAML file
Autoscaling is disabled in this version. This release is not intended for production use.
Running Service Mesh with sidecar injection enabled is currently not recommended with OpenShift Serverless.
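The sample YAML referenced above is not reproduced in this extraction. The following is a minimal sketch of a ServiceMeshControlPlane resource, assuming the maistra.io/v1 API and reflecting the restrictions noted above (autoscaling disabled, sidecar injection off); the resource name and exact field set are assumptions:

```yaml
apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: basic-install        # assumed name
  namespace: istio-system    # must be istio-system
spec:
  istio:
    global:
      proxy:
        autoInject: disabled      # sidecar injection is not recommended with Serverless
    sidecarInjectorWebhook:
      enabled: false
    gateways:
      istio-ingressgateway:
        autoscaleEnabled: false   # autoscaling is disabled in this version
      istio-egressgateway:
        autoscaleEnabled: false
    pilot:
      autoscaleEnabled: false
```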
Prerequisites
- An account with cluster administrator access.
- The ServiceMesh operator must be installed.
Procedure
- Log in to your OpenShift Container Platform installation as a cluster administrator.
Run the following command to create the istio-system namespace:

$ oc new-project istio-system
Copy the sample YAML file into a smcp.yaml file. Apply the YAML file by using the command:

$ oc apply -f smcp.yaml
Run this command to watch the progress of the pods during the installation process:

$ oc get pods -n istio-system -w
3.2.2. Installing a ServiceMeshMemberRoll
If the Service Mesh is configured for multi-tenancy, you must have a Service Mesh member roll for the control plane namespace. For applications to use the deployed control plane and ingress, their namespaces must be part of a member roll.
A multi-tenant control plane installation only affects namespaces configured as part of the Service Mesh. You must specify the namespaces associated with the Service Mesh in a ServiceMeshMemberRoll resource located in the same namespace as the ServiceMeshControlPlane resource, and name it default.
ServiceMeshMemberRoll Custom Resource Example
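The example custom resource is not reproduced in this extraction. The following is a minimal sketch, assuming the maistra.io/v1 API and the naming requirements stated above; the application namespace is a placeholder:

```yaml
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default              # must be named default
  namespace: istio-system    # same namespace as the ServiceMeshControlPlane
spec:
  members:
    - knative-serving        # must be retained for Serverless
    - mynamespace            # placeholder: your application namespaces
```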
Prerequisites
- Installed Service Mesh Operator.
- A custom resource file that defines the parameters of your Red Hat OpenShift Service Mesh control plane.
Procedure
- Create a YAML file that replicates the ServiceMeshMemberRoll Custom Resource sample.
Configure the YAML file to include relevant namespaces.
Note: Add all namespaces to which you want to deploy serverless applications. Ensure you retain the knative-serving namespace in the member roll.

Copy the configured YAML into a file smmr.yaml and apply it by using:

$ oc apply -f smmr.yaml
3.3. Installing the OpenShift Serverless Operator
You can install the OpenShift Serverless Operator in the host cluster by following the OpenShift Container Platform instructions on installing an Operator.
The OpenShift Serverless Operator only works for OpenShift Container Platform versions 4.1.13 and later.
For details, see the OpenShift Container Platform documentation on adding Operators to a cluster.
3.4. Installing Knative Serving
You must create a KnativeServing object to install Knative Serving by using the OpenShift Serverless Operator.

You must create the KnativeServing object in the knative-serving namespace, as shown in the sample YAML, or it is ignored.
Sample serving.yaml
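The sample is not reproduced in this extraction. The following is a minimal sketch of a KnativeServing object; the API version is an assumption for this release:

```yaml
apiVersion: serving.knative.dev/v1alpha1   # assumed API version for this release
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving   # must be knative-serving, or the object is ignored
```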
Prerequisites
- An account with cluster administrator access.
- Installed OpenShift Serverless Operator.
Procedure
Copy the sample YAML file into serving.yaml and apply it by using:

$ oc apply -f serving.yaml

Verify that the installation is complete by using the command:
$ oc get knativeserving/knative-serving -n knative-serving --template='{{range .status.conditions}}{{printf "%s=%s\n" .type .status}}{{end}}'

Results should be similar to:

DeploymentsAvailable=True
InstallSucceeded=True
Ready=True
3.5. Uninstalling Knative Serving
To uninstall Knative Serving, you must remove its custom resource and delete the knative-serving namespace.
Prerequisite
- Installed Knative Serving
Procedure
To remove Knative Serving, use the following command:

$ oc delete knativeserving knative-serving -n knative-serving

After the command has completed and all pods have been removed from the knative-serving namespace, delete the namespace by using the command:

$ oc delete namespace knative-serving
3.6. Deleting the OpenShift Serverless Operator
You can remove the OpenShift Serverless Operator from the host cluster by following the OpenShift Container Platform instructions on deleting an Operator.
For details, see the OpenShift Container Platform documentation on deleting Operators from a cluster.
3.7. Deleting Knative Serving CRDs from the Operator
After uninstalling the OpenShift Serverless Operator, the Operator CRDs and API services remain on the cluster. Use this procedure to completely uninstall the remaining components.
Prerequisite
- You have uninstalled Knative Serving and removed the OpenShift Serverless Operator using the previous procedure.
Procedure
Run the following command to delete the remaining Knative Serving CRDs:

$ oc delete crd knativeservings.serving.knative.dev