
Chapter 4. Getting started with Knative services

Important

You are viewing documentation for a release of Red Hat OpenShift Serverless that is no longer supported. Red Hat OpenShift Serverless is currently supported on OpenShift Container Platform 4.3 and newer.

Knative services are Kubernetes services that a user creates to deploy a serverless application. Each Knative service is defined by a route and a configuration, contained in a .yaml file.

4.1. Creating a Knative service

To create a service, you must create the service.yaml file.

You can copy the sample below. This sample creates a sample Go application called helloworld-go and allows you to specify the image for that application.

apiVersion: serving.knative.dev/v1alpha1 1
kind: Service
metadata:
  name: helloworld-go 2
  namespace: default 3
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go 4
          env:
            - name: TARGET 5
              value: "Go Sample v1"
1 The current version of the Knative Serving API.
2 The name of the application.
3 The namespace the application uses.
4 The URL of the application's container image.
5 The environment variable that the sample application prints.

4.2. Deploying a serverless application

To deploy a serverless application, you must apply the service.yaml file.

Procedure

  1. Navigate to the directory that contains the service.yaml file.
  2. Deploy the application by applying the service.yaml file.

    $ oc apply --filename service.yaml

Now that the service has been created and the application has been deployed, Knative creates a new immutable revision for this version of the application.

Knative also performs network programming to create a route, ingress, service, and load balancer for your application, and automatically scales your pods up and down based on traffic, including scaling inactive pods down to zero.
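After the application is deployed, you can check the readiness of the Knative service and retrieve its public URL by using the oc CLI. The following is a minimal sketch that assumes the helloworld-go sample service from the previous section; the URL that is returned depends on your cluster's domain configuration.

$ oc get ksvc helloworld-go
$ curl $(oc get ksvc helloworld-go -o jsonpath='{.status.url}')

The first request might take a few seconds to respond while Knative scales a pod up from zero.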

Note

The first time that a Knative service is created in a namespace, that namespace automatically receives a new networking configuration. This might cause the initial service to take longer than usual to become ready.

If the namespace has no existing NetworkPolicy configuration, an "allow all" type policy will be applied automatically. This policy will be removed automatically if all Knative Services are removed from that namespace and no other NetworkPolicy configurations have been applied.
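The "allow all" policy is a standard Kubernetes NetworkPolicy that permits all ingress traffic to the pods in the namespace. The following is a minimal sketch of what such a policy looks like; the metadata.name shown here is an assumption, and the policy that OpenShift Serverless generates might use a different name and additional labels.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all # assumed name; the generated policy might be named differently
  namespace: default
spec:
  podSelector: {} # an empty selector matches every pod in the namespace
  ingress:
    - {} # an empty rule allows ingress traffic from any source
  policyTypes:
    - Ingress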

4.3. Connecting Knative Services to existing Kubernetes deployments

Knative Services can call a Kubernetes deployment in any namespace, provided that no additional network barriers are in place.

A Kubernetes deployment can call a Knative Service if:

  • The Kubernetes deployment is in the same namespace as the target Knative Service.
  • The Kubernetes deployment is in a namespace that was manually added to the ServiceMeshMemberRoll in knative-serving-ingress.
  • The Kubernetes deployment uses the target Knative Service’s public URL.

    Note

    Knative Services are accessed by using a public URL by default. The target Knative Service must not be configured as a private, cluster-local visibility service if you want to connect it to your existing Kubernetes deployment by using a public URL, as shown in the example that follows this note.
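For example, a Kubernetes deployment in another namespace can reach the target Knative Service by resolving the public URL from the service's status. The following is a minimal sketch that assumes the helloworld-go sample service in the default namespace; the my-app namespace, the curl-test pod name, and the container image are placeholders for illustration only.

$ URL=$(oc get ksvc helloworld-go -n default -o jsonpath='{.status.url}')
$ oc run curl-test -n my-app --image=registry.access.redhat.com/ubi8/ubi \
    --rm -it --restart=Never -- curl -s $URL

The request is routed through the public ingress, so it succeeds even though the calling workload runs in a different namespace.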
