Chapter 1. Getting started with Knative Serving


1.1. Creating serverless applications

Serverless applications are created and deployed as Kubernetes services, defined by a route and a configuration, and contained in a YAML file. To deploy a serverless application by using OpenShift Serverless, you must create a Knative Service object.

Example Knative Service object YAML file

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: showcase # 1
  namespace: default # 2
spec:
  template:
    spec:
      containers:
        - image: quay.io/openshift-knative/showcase # 3
          env:
            - name: GREET # 4
              value: Ciao

1
The name of the application.
2
The namespace the application uses.
3
The image of the application.
4
The environment variable printed out by the sample application.

You can create a serverless application by using one of the following methods:

  • Create a Knative service from the OpenShift Container Platform web console.

    For more information, see Creating applications in the OpenShift Container Platform documentation.

  • Create a Knative service by using the Knative (kn) CLI.
  • Create and apply a Knative Service object as a YAML file, by using the oc CLI.

1.1.1. Creating serverless applications by using the Knative CLI

Using the Knative (kn) CLI to create serverless applications provides a more streamlined and intuitive user interface than editing YAML files directly. You can use the kn service create command to create a basic serverless application.

Prerequisites

  • You have installed OpenShift Serverless Operator and Knative Serving on your cluster.
  • You have installed the Knative (kn) CLI.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

Procedure

  • Create a Knative service:

    $ kn service create <service_name> --image <image> --tag <tag_value>

    Where:

    • --image is the URI of the image for the application.
    • --tag is an optional flag that you can use to add a tag to the initial revision that Knative creates with the service.

      For example, run the following command:

      $ kn service create showcase \
          --image quay.io/openshift-knative/showcase

      You get an output similar to the following example:

      Creating service 'showcase' in namespace 'default':
      
        0.271s The Route is still working to reflect the latest desired specification.
        0.580s Configuration "showcase" is waiting for a Revision to become ready.
        3.857s ...
        3.861s Ingress has not yet been reconciled.
        4.270s Ready to serve.
      
      Service 'showcase' created with latest revision 'showcase-00001' and URL:
      http://showcase-default.apps-crc.testing
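
The YAML example in the previous section sets the GREET environment variable; with the Knative CLI you can do the same by passing the --env flag to kn service create. A sketch, guarded so that it is skipped when the kn CLI is not installed:

```shell
# Sketch: the CLI equivalent of the YAML example, setting the GREET
# environment variable with the --env flag. The kn command is skipped
# when the CLI is not installed on this system.
if command -v kn >/dev/null 2>&1; then
  kn service create showcase \
    --image quay.io/openshift-knative/showcase \
    --env GREET=Ciao
  result="created"
else
  result="kn CLI not installed"
fi
echo "$result"
```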

1.1.2. Creating serverless applications using YAML

Use YAML files to create Knative resources with a declarative API. Define a Knative Service object in a YAML file and apply it by using oc apply to deploy your serverless application.

When you create the service and deploy the application, Knative creates an immutable revision for that version of the application. Knative also configures the route, ingress, service, and load balancer for the application and scales pods based on traffic.

Prerequisites

  • You have installed OpenShift Serverless Operator and Knative Serving on your cluster.
  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create a YAML file containing the following sample code:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: showcase
      namespace: default
    spec:
      template:
        spec:
          containers:
            - image: quay.io/openshift-knative/showcase
              env:
                - name: GREET
                  value: Bonjour
  2. Navigate to the directory that has the YAML file and run the following command to deploy the application:

    $ oc apply -f <filename>
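
Putting the two steps together, the following sketch writes the sample manifest to a file and applies it. The filename service.yaml is just an example, and the oc apply call is guarded so that it is skipped when the oc CLI is not installed:

```shell
# Write the sample Service manifest to a file, then apply it to the
# cluster. The apply step is skipped when oc is not installed; the
# filename service.yaml is only an example.
cat > service.yaml <<'EOF'
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: showcase
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: quay.io/openshift-knative/showcase
          env:
            - name: GREET
              value: Bonjour
EOF
if command -v oc >/dev/null 2>&1; then
  oc apply -f service.yaml
fi
```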

1.1.3. Creating a service using offline mode

You can run kn service commands in offline mode so the command does not change the cluster and instead creates a service descriptor file on your local machine. After you create the descriptor file, you can change it before propagating changes to the cluster.

Important

The offline mode of the Knative CLI is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Prerequisites

  • You have installed OpenShift Serverless Operator and Knative Serving on your cluster.
  • You have installed the Knative (kn) CLI.

Procedure

  1. In offline mode, create a local Knative service descriptor file:

    $ kn service create showcase \
        --image quay.io/openshift-knative/showcase \
        --target ./ \
        --namespace test

    You get an output similar to the following example:

    Service 'showcase' created in namespace 'test'.
    • The --target ./ flag enables offline mode and specifies ./ as the directory for storing the new directory tree.

      If you do not specify an existing directory but use a filename, such as --target my-service.yaml, the command does not create a directory tree. Instead, the command creates the service descriptor file my-service.yaml in the current directory.

      The filename can have the .yaml, .yml, or .json extension. Choosing .json creates the service descriptor file in the JSON format.

    • The --namespace test option places the new service in the test namespace.

      If you do not use --namespace and you log in to an OpenShift Container Platform cluster, the command creates the descriptor file in the current namespace. If you do not log in to a cluster, the command creates the descriptor file in the default namespace.

  2. Examine the created directory structure:

    $ tree ./

    You get an output similar to the following example:

    ./
    └── test
        └── ksvc
            └── showcase.yaml
    
    2 directories, 1 file
    • The ./ directory specified with --target now has a test/ directory named after the specified namespace.
    • The test/ directory has the ksvc directory, named after the resource type.
    • The ksvc directory has the descriptor file showcase.yaml, named according to the specified service name.
  3. Examine the generated service descriptor file:

    $ cat test/ksvc/showcase.yaml

    You get an output similar to the following example:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      creationTimestamp: null
      name: showcase
      namespace: test
    spec:
      template:
        metadata:
          annotations:
            client.knative.dev/user-image: quay.io/openshift-knative/showcase
          creationTimestamp: null
        spec:
          containers:
          - image: quay.io/openshift-knative/showcase
            name: ""
            resources: {}
    status: {}
  4. List information about the new service:

    $ kn service describe showcase --target ./ --namespace test

    You get an output similar to the following example:

    Name:       showcase
    Namespace:  test
    Age:
    URL:
    
    Revisions:
    
    Conditions:
      OK TYPE    AGE REASON
    • The --target ./ option specifies the root directory for the directory structure containing namespace subdirectories.

      You can also directly specify a YAML or JSON filename with the --target option. The accepted file extensions are .yaml, .yml, and .json.

    • The --namespace option specifies the namespace, which communicates to kn the subdirectory that has the necessary service descriptor file.

      If you do not use --namespace and you log in to an OpenShift Container Platform cluster, kn searches for the service in the subdirectory that matches the current namespace. Otherwise, kn searches in the default/ subdirectory.

  5. Use the service descriptor file to create the service on the cluster:

    $ kn service create -f test/ksvc/showcase.yaml

    You get an output similar to the following example:

    Creating service 'showcase' in namespace 'test':
    
      0.058s The Route is still working to reflect the latest desired specification.
      0.098s ...
      0.168s Configuration "showcase" is waiting for a Revision to become ready.
     23.377s ...
     23.419s Ingress has not yet been reconciled.
     23.534s Waiting for load balancer to be ready
     23.723s Ready to serve.
    
    Service 'showcase' created to latest revision 'showcase-00001' is available at URL:
    http://showcase-test.apps.example.com
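
As described in step 1, the --target option also accepts a filename instead of a directory, in which case kn writes a single descriptor file rather than a directory tree. A sketch of that variant, where my-service.yaml is just an example name; the command is guarded so that it is skipped when the kn CLI is not installed (offline mode itself needs no cluster connection):

```shell
# Offline mode with a filename target: kn writes the descriptor to
# my-service.yaml in the current directory instead of creating a
# directory tree. Skipped when the kn CLI is not installed.
if command -v kn >/dev/null 2>&1; then
  kn service create showcase \
    --image quay.io/openshift-knative/showcase \
    --target my-service.yaml
fi
attempted=yes
```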

1.1.4. Verifying your serverless application deployment

To verify that your serverless application deployed successfully, get the application URL that Knative created. Then send a request to that URL and observe the output. OpenShift Serverless supports both HTTP and HTTPS URLs. However, the oc get ksvc command always prints URLs in the http:// format.

Prerequisites

  • You have installed the OpenShift Serverless Operator and Knative Serving on your cluster.
  • You have installed the oc CLI.
  • You have created a Knative service.

Procedure

  1. Find the application URL:

    $ oc get ksvc <service_name>

    Example output

    NAME       URL                                   LATESTCREATED    LATESTREADY      READY   REASON
    showcase   http://showcase-default.example.com   showcase-00001   showcase-00001   True

  2. Make a request to your cluster and observe the output.

    Example HTTP request (using the HTTPie tool)

    $ http showcase-default.example.com

    Example HTTPS request

    $ https showcase-default.example.com

    Example output

    HTTP/1.1 200 OK
    Content-Type: application/json
    Server: Quarkus/2.13.7.Final-redhat-00003 Java/17.0.7
    X-Config: {"sink":"http://localhost:31111","greet":"Ciao","delay":0}
    X-Version: v0.7.0-4-g23d460f
    content-length: 49
    
    {
        "artifact": "knative-showcase",
        "greeting": "Ciao"
    }

  3. Optional. If you do not have the HTTPie tool installed on your system, you can use the curl tool instead:

    Example HTTP request

    $ curl http://showcase-default.example.com

    Example output

    {"artifact":"knative-showcase","greeting":"Ciao"}

  4. Optional. If you receive an error relating to a self-signed certificate in the certificate chain, you can add the --verify=no flag to the HTTPie command to ignore the error:

    $ https --verify=no showcase-default.example.com

    Example output

    HTTP/1.1 200 OK
    Content-Type: application/json
    Server: Quarkus/2.13.7.Final-redhat-00003 Java/17.0.7
    X-Config: {"sink":"http://localhost:31111","greet":"Ciao","delay":0}
    X-Version: v0.7.0-4-g23d460f
    content-length: 49
    
    {
        "artifact": "knative-showcase",
        "greeting": "Ciao"
    }

    Important

    Do not use self-signed certificates in a production deployment. This method is for testing purposes only.

  5. Optional. If your OpenShift Container Platform cluster uses a certificate signed by a certificate authority (CA) that your system does not yet trust, specify the certificate when you run the curl command. Pass the certificate path to the curl command by using the --cacert flag:

    $ curl https://showcase-default.example.com --cacert <file>

    Example output

    {"artifact":"knative-showcase","greeting":"Ciao"}
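
The verification steps above can be combined into a short script: a jsonpath query extracts only the URL from the ksvc status, and curl then requests it. This is a sketch that assumes the service name showcase; the cluster commands are guarded so that they are skipped when the oc CLI is not installed or the service does not exist:

```shell
# Read the service URL from the ksvc status and request it with curl.
# Skipped when the oc CLI is not installed or the service is absent.
if command -v oc >/dev/null 2>&1 && oc get ksvc showcase >/dev/null 2>&1; then
  url=$(oc get ksvc showcase -o jsonpath='{.status.url}')
  echo "Service URL: $url"
  curl -s "$url"
fi
verified=attempted
```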
