Chapter 8. Patient Portal


A simple database-backed web application that runs in the public cloud but keeps its data in a private database

This example is part of a suite of examples showing the different ways you can use Skupper to connect services across cloud providers, data centers, and edge sites.

Overview

This example is a simple database-backed web application that shows how you can use Skupper to access a database at a remote site without exposing it to the public internet.

It contains three services:

  • A PostgreSQL database running on a bare-metal or virtual machine in a private data center.
  • A payment-processing service running on Kubernetes in a private data center.
  • A web frontend service running on Kubernetes in the public cloud. It uses the PostgreSQL database and the payment-processing service.

The example uses two Kubernetes namespaces, private and public, to represent the Kubernetes cluster in the private data center and the cluster in the public cloud. It uses Podman to run the database.

Prerequisites

Procedure

  • Clone the repo for this example
  • Install the Skupper command-line tool
  • Set up your Kubernetes namespaces
  • Set up your Podman network
  • Deploy the application
  • Create your sites
  • Link your sites
  • Expose application services
  • Access the frontend

    1. Clone the repo for this example. Find the appropriate GitHub repository at https://skupper.io/examples/index.html and clone it.
    2. Install the Skupper command-line tool

      This example uses the Skupper command-line tool to deploy Skupper. You need to install the skupper command only once for each development environment.

      See the installation documentation for details about installing the CLI. On systems with the appropriate package repository configured, use the following command:

      sudo dnf install skupper-cli
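      If the package isn't available on your system, the Skupper project also publishes an install script. A minimal alternative sketch, assuming the script at https://skupper.io/install.sh (review any script before piping it to a shell):

      # Assumption: the upstream install script, which places the skupper binary on your PATH
      curl https://skupper.io/install.sh | sh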
    3. Set up your Kubernetes namespaces

      Skupper is designed for use with multiple Kubernetes namespaces, usually on different clusters. The skupper and kubectl commands use your kubeconfig and current context to select the namespace where they operate.

      Your kubeconfig is stored in a file in your home directory. The skupper and kubectl commands use the KUBECONFIG environment variable to locate it.

      A single kubeconfig supports only one active context per user. Since you will be using multiple contexts at once in this exercise, you need to create distinct kubeconfigs.

      For each namespace, open a new terminal window. In each terminal, set the KUBECONFIG environment variable to a different path and log in to your cluster. Then create the namespace you wish to use and set the namespace on your current context.

      Note

      The login procedure varies by provider. See your cluster provider's documentation for the login command to use in the sessions below.

      Public:

      export KUBECONFIG=~/.kube/config-public
      # Enter your provider-specific login command
      kubectl create namespace public
      kubectl config set-context --current --namespace public

      Private:

      export KUBECONFIG=~/.kube/config-private
      # Enter your provider-specific login command
      kubectl create namespace private
      kubectl config set-context --current --namespace private
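      Before continuing, you can confirm that each terminal points at the intended namespace. A quick sanity check using standard kubectl options:

      # Run in each terminal; should print "public" or "private" respectively
      kubectl config view --minify --output 'jsonpath={..namespace}'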
    4. Set up your Podman network

      Open a new terminal window and set the SKUPPER_PLATFORM environment variable to podman. This sets the Skupper platform to Podman for this terminal session.

      Use podman network create to create the Podman network that Skupper will use.

      Use systemctl to enable the Podman API service.

      Podman:

      export SKUPPER_PLATFORM=podman
      podman network create skupper
      systemctl --user enable --now podman.socket

      If the systemctl command doesn’t work, you can try the podman system service command instead:

      podman system service --time=0 unix://$XDG_RUNTIME_DIR/podman/podman.sock &
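      Either way, you can verify that the API socket is available before moving on. A quick check for the socket-activated case:

      # Should report the socket as active (listening)
      systemctl --user status podman.socket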
    5. Deploy the application

      Use kubectl apply to deploy the frontend and payment processor on Kubernetes. Use podman run to start the database on your local machine.

      Note

      It is important to name your running container using --name to avoid a collision with the container that Skupper creates for accessing the service.

      Note

      You must use --network skupper with the podman run command.

      Public:

      kubectl apply -f frontend/kubernetes.yaml

      Private:

      kubectl apply -f payment-processor/kubernetes.yaml

      Podman:

      podman run --name database-target --network skupper --detach --rm -p 5432:5432 quay.io/skupper/patient-portal-database
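      Before creating the sites, it's worth confirming that the workloads are up. A minimal check, using the container and deployment names from the commands above:

      # Podman terminal: the database container should be listed as running
      podman ps --filter name=database-target

      # Public and Private terminals: wait for the deployments to report READY
      kubectl get deployments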
    6. Create your sites

      Public:

      skupper init

      Private:

      skupper init --ingress none

      Podman:

      skupper init --ingress none
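      After initializing each site, you can confirm that the Skupper components are running. The skupper status command works at each site (its output varies by Skupper version):

      # Run in each of the three terminals
      skupper status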
    7. Link your sites

      Creating a link requires the use of two skupper commands in conjunction: skupper token create and skupper link create.

      The skupper token create command generates a secret token that signifies permission to create a link. The token also carries the link details. Then, in a remote site, the skupper link create command uses the token to create a link to the site that generated it.

      Note

      The link token is truly a secret. Anyone who has the token can link to your site. Make sure that only those you trust have access to it.

      First, use skupper token create in site Public to generate the token. Then, use skupper link create in site Private to link the sites.

      Public:

      skupper token create --uses 2 ~/secret.token

      Private:

      skupper link create ~/secret.token

      Podman:

      skupper link create ~/secret.token

      If your terminal sessions are on different machines, you may need to use scp or a similar tool to transfer the token securely. By default, tokens expire after a single use or 15 minutes after creation.
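      For example, a minimal transfer sketch, assuming the Private and Podman sessions run on a host reachable as private-host (a placeholder):

      # Copy the token to the machine hosting the Private and Podman sessions
      scp ~/secret.token user@private-host:~/secret.token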

    8. Expose application services

      In Private, use skupper expose to expose the payment processor service.

      In Podman, use skupper service create and skupper service bind to expose the database on the Skupper network.

      Then, in Public, use skupper service create to make it available.

      Note

      Podman sites do not automatically replicate services to remote sites. You need to use skupper service create on each site where you wish to make a service available.

      Private:

      skupper expose deployment/payment-processor --port 8080

      Podman:

      skupper service create database 5432
      skupper service bind database host database-target --target-port 5432

      Public:

      skupper service create database 5432
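      To verify that the services are in place, you can list the services known to each site. A quick check using skupper service status (output varies by version):

      # Run in each terminal to list the services visible at that site
      skupper service status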
    9. Access the frontend

      To use and test the application, you need external access to the frontend.

      Use kubectl expose with --type LoadBalancer to open network access to the frontend service.

      Once the frontend is exposed, use kubectl get service/frontend to look up the external IP of the frontend service. If the external IP is <pending>, try again after a moment.

      Once you have the external IP, use curl or a similar tool to request the /api/health endpoint at that address.

      Note

      The <external-ip> field in the following commands is a placeholder. The actual value is an IP address.

      Public:

      kubectl expose deployment/frontend --port 8080 --type LoadBalancer
      kubectl get service/frontend
      curl http://<external-ip>:8080/api/health

      Sample output:

      $ kubectl expose deployment/frontend --port 8080 --type LoadBalancer
      service/frontend exposed
      
      $ kubectl get service/frontend
      NAME       TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
      frontend   LoadBalancer   10.103.232.28   <external-ip>   8080:30407/TCP   15s
      
      $ curl http://<external-ip>:8080/api/health
      OK

      If everything is in order, you can now access the web interface by navigating to http://<external-ip>:8080/ in your browser.
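      If you prefer to script the address lookup, here is a small sketch using kubectl's JSONPath output (this assumes your provider reports a bare IP rather than a hostname):

      # Extract the external IP, then request the health endpoint
      EXTERNAL_IP=$(kubectl get service/frontend --output 'jsonpath={.status.loadBalancer.ingress[0].ip}')
      curl "http://${EXTERNAL_IP}:8080/api/health"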
