Chapter 3. Managing Fuse Online on OCP
After you install Fuse Online on OpenShift Container Platform (OCP) on-site, you can use Prometheus to monitor integration activity, and you can set up periodic Fuse Online backups, which you can use to restore Fuse Online environments. As needed, you can upgrade Fuse Online, uninstall Fuse Online, or delete an OCP project that contains Fuse Online.
See the following topics for details:
- Section 3.1, “Auditing Fuse Online components”
- Section 3.2, “Monitoring Fuse Online integrations and infrastructure components with Prometheus”
- Section 3.3, “Fuse Online Metering labels”
- Section 3.4, “Backing up a Fuse Online environment”
- Section 3.5, “Restoring a Fuse Online environment”
- Section 3.6, “Upgrading Fuse Online”
- Section 3.7, “Uninstalling Fuse Online from an OCP project”
- Section 3.8, “Deleting an OCP project that contains Fuse Online”
3.1. Auditing Fuse Online components
Fuse Online auditing is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/
Fuse Online supports basic auditing for changes made by any user to the following Fuse Online components:
- Connections - The Name field and any other fields shown on the connector’s Details page in the Fuse Online web console.
- Connectors - The Name field.
- Integrations - The Name field.
When a developer makes an update to one of these component fields (for example, changes the name of an integration), Fuse Online sends an AUDIT message to standard output that includes information such as ID, user, timestamp, component (connection, connector, or integration), and the type of change (create, modify, or delete). Note that the field values in an audit message are truncated to 30 characters.
By default, Fuse Online auditing is disabled. You can enable it by editing the Fuse Online custom resource. To enable auditing before you install Fuse Online, see Descriptions of custom resource attributes that configure Fuse Online.
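For example, a custom resource that enables auditing at installation time could look like the following. This is a minimal sketch: the apiVersion, kind, and metadata values are reused from the backup scheduling example later in this chapter, and the auditing attributes are the ones set in the procedure below.
apiVersion: syndesis.io/v1beta1
kind: Syndesis
metadata:
  name: app
spec:
  components:
    server:
      features:
        auditing: true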
Prerequisites
- The oc client tool is installed and connected to the OCP cluster in which Fuse Online is installed.
- You have permission to edit the Fuse Online custom resource.
Procedure
Log in to OpenShift with an account that gives you permission to edit the Fuse Online custom resource. For example:
oc login -u admin -p admin-password
Switch to the project that is running the Fuse Online environment. For example:
oc project my-fuse-online-project
Edit the syndesis custom resource:
Invoke the following command, which typically opens the resource in an editor:
oc edit syndesis
Ensure that the following lines are in the resource. Edit as needed:
components:
  server:
    features:
      auditing: true
Save the resource.
When you enable the auditing feature in the syndesis custom resource, the running syndesis-server configuration reloads and Fuse Online starts logging relevant changes to Fuse Online components.
To view the Fuse Online audit log messages, type the following command:
oc logs -l syndesis.io/component=syndesis-server
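Because audit entries are emitted as AUDIT messages on standard output, as described above, you can narrow the output to just those entries with a standard shell filter, for example:
oc logs -l syndesis.io/component=syndesis-server | grep AUDIT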
3.2. Monitoring Fuse Online integrations and infrastructure components with Prometheus
You can use Prometheus to monitor Fuse Online infrastructure components and Fuse Online integrations. You can also use Grafana dashboards to visualize the metrics gathered by Prometheus.
Red Hat support for Prometheus is limited to the setup and configuration recommendations provided in Red Hat product documentation.
Grafana is a community-supported feature. Deploying Grafana to monitor Red Hat Fuse products is not supported with Red Hat production service level agreements (SLAs).
In addition to monitoring Fuse Online integrations, you can use Prometheus to monitor the metrics exposed by the following Fuse Online infrastructure components:
- Syndesis Server - The syndesis-server component is instrumented with Micrometer and exposes all of the JVM Micrometer metrics automatically by default. Additionally, syndesis-server exposes metrics about the REST API endpoints, such as request rate, error rate, and latency.
- Syndesis Meta - The syndesis-meta component is instrumented with Micrometer and exposes all of the JVM Micrometer metrics automatically by default. It also exposes metrics about its REST API endpoints.
- Syndesis DB - Metrics for the Fuse Online Postgres database are exported by using a third-party Prometheus exporter.
- Integrations - Integration metrics are visible after an integration has been created and are exported by using the official JMX exporter, which exposes several JVM metrics by default. Additionally, integrations expose metrics that are specific to Apache Camel, such as message rate and error rate.
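For example, because the server and meta components expose the standard Micrometer JVM metrics, a PromQL query such as the following charts JVM heap usage. This is an illustrative query: jvm_memory_used_bytes and its area tag are standard Micrometer names, and the namespace label value depends on your project.
jvm_memory_used_bytes{namespace="my-fuse-online-project", area="heap"}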
Prerequisites
- Fuse Online is installed and running on OCP 4.9 (or later) on-site.
- The oc client tool is installed and connected to the OCP cluster in which Fuse Online is installed.
- You have admin access to the OCP cluster.
- Your Fuse Online installation is configured with the ops addon enabled. If required, you can enable it with this command (you can confirm the setting afterwards, as shown below):
oc patch syndesis/app --type=merge -p '{"spec": {"addons": {"ops": {"enabled": true}}}}'
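As an optional check, you can confirm the addon setting; the field path here matches the patch above:
oc get syndesis/app -o jsonpath='{.spec.addons.ops.enabled}'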
Procedure
If there is an existing openshift-monitoring configuration, skip to Step 2. Otherwise, create an openshift-monitoring configuration that sets the user workload monitoring option to true, and then skip to Step 3:
oc apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
EOF
If there is an existing openshift-monitoring configuration:
Check the existing openshift-monitoring configuration to determine whether the user workload monitoring option is set to true:
oc get -n openshift-monitoring cm/cluster-monitoring-config -ojsonpath='{.data.config\.yaml}'
If the result is enableUserWorkload: true, the user workload monitoring option is set to true. Skip to Step 3.
If the result shows any other configuration, continue to the next step to enable the monitoring of user workloads by editing the ConfigMap.
Open the ConfigMap file in an editor, for example:
oc -n openshift-monitoring edit cm/cluster-monitoring-config
Set enableUserWorkload to true. For example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
- Save the ConfigMap file.
Use the following command to watch the status of the pods in the openshift-user-workload-monitoring namespace:
oc -n openshift-user-workload-monitoring get pods -w
Wait until the status of the pods is Running, for example:
prometheus-operator-5d989f48fd-2qbzd   2/2   Running
prometheus-user-workload-0             5/5   Running
prometheus-user-workload-1             5/5   Running
thanos-ruler-user-workload-0           3/3   Running
thanos-ruler-user-workload-1           3/3   Running
Verify that the Fuse Online alert rules are enabled in Prometheus:
Access the internal Prometheus instance:
oc port-forward -n openshift-user-workload-monitoring pod/prometheus-user-workload-0 9090
- Open your browser to localhost:9090.
- Select Status > Targets. You should see three syndesis endpoints.
- Press CTRL-C to terminate the port-forward process.
- From the OperatorHub, install the Grafana Operator version 4 to a namespace of your choice, for example, to the grafana-middleware namespace. Use the v4 update channel.
Add a cluster role and a cluster role binding to allow the grafana-operator to list nodes and namespaces:
Download the cluster role YAML file from the grafana-operator website:
curl https://raw.githubusercontent.com/grafana-operator/grafana-operator/v4/deploy/cluster_roles/cluster_role_grafana_operator.yaml > tmp_role.yaml
Add cluster permission for the grafana-operator to read other namespaces and nodes:
cat <<EOF >> tmp_role.yaml
- apiGroups:
  - ""
  resources:
  - namespaces
  - nodes
  verbs:
  - get
  - list
  - watch
EOF
Apply the updated cluster role:
oc apply -f tmp_role.yaml
Create the cluster role binding for the grafana-operator service account:
oc apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: grafana-operator
roleRef:
  name: grafana-operator
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: grafana-operator-controller-manager
  namespace: grafana-middleware
EOF
Enable the grafana-operator to read Grafana dashboards from all namespaces by setting the DASHBOARD_NAMESPACES_ALL environment variable to true:
oc -n grafana-middleware patch subs/grafana-operator --type=merge -p '{"spec":{"config":{"env":[{"name":"DASHBOARD_NAMESPACES_ALL","value":"true"}]}}}'
Check that the grafana pods are recreated:
oc -n grafana-middleware get pods -w
Optionally, view the grafana-operator logs:
oc -n grafana-middleware logs -f `oc -n grafana-middleware get pods -oname|grep grafana-operator-controller-manager` -c manager
Add a Grafana custom resource to start a Grafana server pod, for example:
oc apply -f - <<EOF
apiVersion: integreatly.org/v1alpha1
kind: Grafana
metadata:
  name: grafana-middleware
  namespace: grafana-middleware
spec:
  config:
    auth:
      disable_signout_menu: true
    auth.anonymous:
      enabled: true
    log:
      level: warn
      mode: console
    security:
      admin_password: secret
      admin_user: root
  dashboardLabelSelector:
  - matchExpressions:
    - key: app
      operator: In
      values:
      - grafana
      - syndesis
  ingress:
    enabled: true
EOF
Allow the Grafana service account to read monitoring information:
oc -n grafana-middleware adm policy add-cluster-role-to-user cluster-monitoring-view -z grafana-serviceaccount
Add a GrafanaDataSource to query thanos-querier:
oc apply -f - <<EOF
apiVersion: integreatly.org/v1alpha1
kind: GrafanaDataSource
metadata:
  name: prometheus-grafanadatasource
  namespace: grafana-middleware
spec:
  datasources:
  - access: proxy
    editable: true
    isDefault: true
    jsonData:
      httpHeaderName1: 'Authorization'
      timeInterval: 5s
      tlsSkipVerify: true
    name: Prometheus
    secureJsonData:
      httpHeaderValue1: "Bearer $(oc get secret $(oc get secret | grep grafana-serviceaccount-token | awk '{print$1}') -o=jsonpath="{.data.token}" | base64 -d)"
    type: prometheus
    url: "https://$(oc get route thanos-querier -n openshift-monitoring -ojsonpath='{.spec.host}')"
  name: prometheus-grafanadatasource.yaml
EOF
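Optionally, confirm that the data source resource was created; the resource type name here is assumed from the GrafanaDataSource kind above:
oc -n grafana-middleware get grafanadatasources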
View the Grafana server log:
oc logs -f `oc get pods -l app=grafana -oname`
To access the Grafana URL and view the Fuse Online dashboards:
echo "https://"$(oc -n grafana-middleware get route/grafana-route -ojsonpath='{.spec.host}')
In the left panel of the Grafana console, click the search button. A folder (OCP namespace name) containing the dashboards for each Syndesis instance is displayed.
- For Fuse Online integrations, select Integration - Camel. This dashboard displays the standard metrics exposed by Apache Camel integration applications.
For Fuse Online infrastructure components, select one of the following infrastructure dashboards:
- Infrastructure - DB - Displays metrics related to the Fuse Online Postgres instance.
- Infrastructure - JVM - Displays metrics about the running JVM for the syndesis-meta or syndesis-server applications. Choose the application that you want to monitor from the Application drop-down list at the top of the dashboard.
- Infrastructure - REST APIs - Displays metrics relating to the Fuse Online infrastructure API endpoints, such as request throughput and latency. Choose the application that you want to monitor from the Application drop-down list at the top of the dashboard.
Additional resources
For information about getting started with Prometheus, go to: https://prometheus.io/docs/prometheus/latest/getting_started/
3.3. Fuse Online Metering labels
You can use the OpenShift Metering operator to analyze your installed Fuse Online operator and components to determine whether you are in compliance with your Red Hat subscription. For more information on Metering, see the OpenShift documentation.
The following table lists the metering labels for Fuse Online infrastructure components and integrations.
Label | Possible values |
---|---|
com.company | Red_Hat |
rht.prod_name | Red_Hat_Integration |
rht.prod_ver | 7.8 |
rht.comp | Fuse |
rht.comp_ver | 7.8 |
rht.subcomp | The infrastructure component name (for example, syndesis-db) or, for an integration, i- followed by the integration deployment name (for example, i-mytestapp) |
rht.subcomp_t | infrastructure or application |
Examples
Infrastructure example (where the infrastructure component is syndesis-db)
com.company: Red_Hat
rht.prod_name: Red_Hat_Integration
rht.prod_ver: 7.8
rht.comp: Fuse
rht.comp_ver: 7.8
rht.subcomp: syndesis-db
rht.subcomp_t: infrastructure
Application example (where the integration deployment name is mytestapp)
com.company: Red_Hat
rht.prod_name: Red_Hat_Integration
rht.prod_ver: 7.8
rht.comp: Fuse
rht.comp_ver: 7.8
rht.subcomp: i-mytestapp
rht.subcomp_t: application
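Assuming that these metering labels are applied as pod labels, a label selector query such as the following (illustrative) lists the pods for all running integrations together with their labels:
oc get pods -l rht.subcomp_t=application --show-labels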
3.4. Backing up a Fuse Online environment
You can configure Fuse Online to periodically back up:
- The internal PostgreSQL database in which Fuse Online stores connections and integrations.
- OpenShift resources that syndesis-operator creates and that are needed to run Fuse Online. This includes, but is not limited to, configuration maps, deployment configurations, and service accounts.
You can configure backups for a Fuse Online environment before you install Fuse Online or you can change the configuration of a Fuse Online environment to enable backups.
When Fuse Online is configured to perform backups, Fuse Online zips data into one file and uploads that file to an Amazon S3 bucket that you specify. You can apply a backup to a new Fuse Online environment (no connections or integrations defined) to restore the Fuse Online environment that was backed up.
Prerequisites
- OCP is running on-site.
- The oc client tool is installed and connected to the OCP cluster in which Fuse Online is or will be running.
- A user with cluster administration permissions gave you permission to install Fuse Online in any project that you have permission to access in the cluster.
- You have an AWS access key and an AWS secret key. For details about obtaining these credentials, see the AWS documentation for Managing Access Keys for IAM Users.
- You know the AWS region where the S3 bucket that you want to upload to resides.
- You know the name of the S3 bucket that you want to upload backups to.
Procedure
Log in to OpenShift with an account that has permission to install Fuse Online. For example:
oc login -u developer -p developer
Switch to the OpenShift project that is or will be running the Fuse Online environment for which you want to configure backups. For example:
oc project my-fuse-online-project
Create an OpenShift secret. In the command line:
- Specify syndesis-backup-s3 as shown in the following command format. Replace the AWS variables with your AWS access key, AWS secret key, AWS region in which the bucket resides, and the name of the bucket.
Use the following command format to create the secret:
oc create secret generic syndesis-backup-s3 \
  --from-literal=secret-key-id="my-aws-access-key" \
  --from-literal=secret-access-key="my-aws-secret-key" \
  --from-literal=region="aws-region" \
  --from-literal=bucket-name="aws-bucket-name"
This secret must be present when the backup job is running.
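You can verify that the secret exists in the project:
oc get secret syndesis-backup-s3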
If Fuse Online is not yet installed, you must edit the default-cr.yml file to enable backups. See Editing the syndesis custom resource before installing Fuse Online. After Fuse Online is installed, there will be backup jobs according to the schedule that you specified in the custom resource.
If Fuse Online is running, you must edit the syndesis custom resource:
Invoke the following command, which opens the syndesis custom resource in an editor:
oc edit syndesis
Add the following under spec::
backup:
  schedule: my-backup-interval
Replace my-backup-interval with the desired duration between backups. To determine how to specify the interval between backups, consult the following resources:
- cron pre-defined schedules
Do not specify the @ sign in front of the interval. For example, to configure daily backups, the custom resource would contain something like this:
apiVersion: syndesis.io/v1beta1
kind: Syndesis
metadata:
  name: app
spec:
  backup:
    schedule: daily
Save the file.
This adds a backup job to syndesis-operator.
Result
If Fuse Online was already running, there is now a Fuse Online backup job according to the schedule that you defined.
Next steps
If Fuse Online needs to be installed, edit the default-cr.yml file to enable any other desired features or set any other parameters. When the default-cr.yml file has all the settings that you want, install Fuse Online in the project that you specified when you created the OpenShift secret.
3.5. Restoring a Fuse Online environment
In a new Fuse Online environment, in which you have not yet created any connections or integrations, you can restore a backup of a Fuse Online environment. After you restore a Fuse Online environment, you must edit the restored connections to update their passwords. You should then be able to publish the restored integrations.
Prerequisites
- OCP is running on-site.
- The oc client tool is installed and connected to the OCP cluster in which you want to restore a Fuse Online environment.
- A user with cluster administration permissions gave you permission to install Fuse Online in any project that you have permission to access in the cluster.
- There is a Fuse Online environment that was configured to periodically back up data and upload the data to Amazon S3.
- The Fuse Online release number, for example, 7.6, is the same for the Fuse Online environment that was backed up and the Fuse Online environment in which you want to restore the backup.
- You have permission to access the AWS bucket that contains the Fuse Online backups.
- The Fuse Online environment in which you want to restore a backup is a new Fuse Online installation. In other words, there are no connections or integrations that you defined. If you want to restore Fuse Online in a project that has a Fuse Online environment with connections and integrations, then you must uninstall that Fuse Online environment and install a new Fuse Online environment.
Procedure
- Download the desired backup file from Amazon S3. Details for doing this are in the AWS documentation for How Do I Download an Object from an S3 Bucket?
Extract the content of the zip file. For example, the following command line unzips the 7.6-2020-03-15-23:30:00.zip file and copies the content into the /tmp/fuse-online-backup folder:
unzip 7.6-2020-03-15-23:30:00.zip -d /tmp/fuse-online-backup
Decode the Fuse Online database, for example:
base64 -d /tmp/fuse-online-backup/syndesis-db.dump > /tmp/fuse-online-backup/syndesis-db
Switch to the OpenShift project that is running the new Fuse Online environment. For example, if the new Fuse Online environment is in the my-fuse-online-project project, then you would invoke the following command:
oc project my-fuse-online-project
The remainder of this procedure assumes that you have switched to the project that contains the new Fuse Online environment.
Obtain the name of the database pod.
If the restored Fuse Online environment uses the provided, internal, PostgreSQL database, invoke the following command to obtain the name of the database pod:
oc get pods -l deploymentconfig=syndesis-db -o jsonpath='{.items[*].metadata.name}'
If the restored Fuse Online environment uses an external database, it is assumed that you know how to obtain the name of the pod for that database.
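Optionally, capture the pod name in a shell variable so that you can reuse it in the commands that follow. This is a convenience sketch; DATABASE_POD_NAME is the placeholder used in the remaining commands:
DATABASE_POD_NAME=$(oc get pods -l deploymentconfig=syndesis-db -o jsonpath='{.items[*].metadata.name}')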
In the remaining commands, where you see DATABASE_POD_NAME, insert the name of the database pod for the restored Fuse Online environment.
Scale down the components that are accessing the database in any way.
Scale down syndesis-operator so that other components can be scaled down:
oc scale deployment syndesis-operator --replicas 0
Scale down the syndesis-server and syndesis-meta components:
oc scale dc syndesis-server --replicas 0
oc scale dc syndesis-meta --replicas 0
Send the database backup file to the Fuse Online database pod:
oc cp /tmp/fuse-online-backup/syndesis-db DATABASE_POD_NAME:/tmp/syndesis-db
Open a remote shell session in the Fuse Online database pod:
oc rsh DATABASE_POD_NAME
Invoke the following commands to restore the Fuse Online database.
If a psql command prompts for the database password, and the restored Fuse Online environment uses the provided, internal PostgreSQL database, you can find the password in the POSTGRESQL_PASSWORD environment variable in the syndesis-db deployment configuration. If the restored Fuse Online environment uses an external database, then it is assumed that you know the password.
cd /tmp
psql -c 'DROP database if exists syndesis_restore'
psql -c 'CREATE database syndesis_restore'
pg_restore -v -d syndesis_restore /tmp/syndesis-db
psql -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'syndesis'"
psql -c 'DROP database if exists syndesis'
psql -c 'ALTER database syndesis_restore rename to syndesis'
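Optionally, sanity-check the restored database by listing its tables before you leave the shell session:
psql -d syndesis -c '\dt'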
Fuse Online should now be restored. You can end the RSH session:
exit
Scale up the Fuse Online components:
oc scale deployment syndesis-operator --replicas 1
Scaling syndesis-operator to 1 should bring up the other pods that were scaled down. However, if that does not happen, you can scale them up manually:
oc scale dc syndesis-server --replicas 1
oc scale dc syndesis-meta --replicas 1
The server tries to start each restored integration, but you must update the connections first. Consequently, ensure that the restored integrations are not running:
Obtain the Fuse Online console route:
echo "https://$(oc get route/syndesis -o jsonpath='{.spec.host}' )"
- Log in to the Fuse Online console with an OpenShift user account that has permission to install Fuse Online.
- Display the list of integrations and ensure that all integrations are stopped. If an integration is running, stop it.
For each connection that has a password, you need to update the connection to have the correct password for this Fuse Online environment. The following steps show how to do this for the provided PostgresDB connection.
- In the OpenShift console for the project in which this restored Fuse Online environment is running, retrieve the password for the PostgresDB connection. In the syndesis-db deployment, the password is available in the environment variables.
- In the Fuse Online console, display the connections.
- Edit the PostgresDB connection.
- In the connection details for the PostgresDB connection, paste the retrieved password in the Password field.
For each integration, confirm that there are no Configuration Required indicators. If there are, edit the integration to resolve the issues. When all steps in the integration are correct, publish the integration.
If Fuse Online keeps rolling an integration back to a Stopped state right after the Build step, delete the deployment, ensure that no configuration is required, and try publishing the integration again.
You can safely ignore the following message if you see it in the log:
Error performing GET request to https://syndesis-my-fuse-online-project.my-cluster-url/api/v1/metrics/integrations
3.6. Upgrading Fuse Online
From time to time, fresh application images, which incorporate patches and security fixes, are released for Fuse Online. You are notified of these updates through Red Hat’s errata update channel. You can then upgrade your Fuse Online images.
For OCP 4.x, upgrade from Fuse Online 7.11 to 7.12 by following the steps in Upgrading Fuse Online by using the OperatorHub.
You should determine whether upgrading to Fuse Online 7.12 requires you to make changes to your existing integrations. Even if no changes are required, you must republish any running integrations when you upgrade Fuse Online.
3.6.1. Upgrading Fuse Online by using the OperatorHub (OCP 4.x)
Use the OpenShift OperatorHub to upgrade from Fuse Online 7.11 to 7.12.
- Fuse Online 7.12 requires OpenShift Container Platform (OCP) 4.6 or later. If you are using OCP 4.5 or earlier, you must upgrade to OCP 4.6 or later if you want to upgrade to Fuse Online 7.12.
On OCP 4.9, when you upgrade to 7.11, the following warning is displayed during the Fuse Online Operator upgrade process:
W1219 18:38:58.064578 1 warnings.go:70] extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
This warning appears because clients (that Fuse Online uses for the Kubernetes/OpenShift API initialization code) access a deprecated Ingress version. This warning is not an indicator of complete use of deprecated APIs and there is no issue with upgrading to Fuse Online 7.11.
The upgrade process from a Fuse Online 7.11 or an earlier 7.12 version to a newer Fuse Online 7.12 version depends on the Approval Strategy that you selected when you installed Fuse Online:
- For Automatic updates, when a new version of the Fuse Online operator is available, the OpenShift Operator Lifecycle Manager (OLM) automatically upgrades the running instance of the Fuse Online without human intervention.
- For Manual updates, when a newer version of an Operator is available, the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Fuse Online operator updated to the new version as described in the Manually approving a pending Operator upgrade section of the OpenShift documentation.
During and after an infrastructure upgrade, existing integrations continue to run with the older versions of Fuse Online libraries and dependencies.
To have existing integrations run with the updated Fuse Online version, you must republish the integrations.
3.6.2. Upgrading Fuse Online integrations
When you upgrade to Fuse Online 7.12, you should determine whether you need to make changes to your existing integrations.
Review the Apache Camel updates described in Camel Migration Considerations.
Even if your integrations do not require changes, you must republish any running integrations because during and after an infrastructure upgrade, existing integrations continue to run with the older versions of Fuse Online libraries and dependencies. To have them run with the updated versions, you must republish them.
Procedure
To republish your integrations, in your Fuse Online environment:
- In the Fuse Online left navigation panel, click Integrations.
For each integration:
- To the right of the integration entry, click the options icon and select Edit.
- When Fuse Online displays the integration for editing, in the upper right, click Publish.
Publishing forces a rebuild that uses the latest Fuse Online dependencies.
The Fuse Online user interface shows a warning if any element of an integration has a newer dependency that you need to update.
3.7. Uninstalling Fuse Online from an OCP project
You can uninstall Fuse Online from an OCP project without deleting the project or anything else in that project. After uninstalling Fuse Online, integrations that are running continue to run, but you can no longer edit or republish them.
Prerequisites
- You have an OCP project in which Fuse Online is installed.
- You exported any integrations that you might want to use in some other OpenShift project in which Fuse Online is installed. If necessary, see Exporting integrations.
Procedure
Log in to OpenShift with an account that has permission to install Fuse Online. For example:
oc login -u developer -p developer
Switch to the OpenShift project that is running the Fuse Online environment that you want to uninstall. For example:
oc project my-fuse-online-project
Delete Fuse Online infrastructure:
oc delete syndesis app
Delete the syndesis-operator Deployment and ImageStream resources:
oc delete deployment/syndesis-operator
oc delete is/syndesis-operator
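Once the deletion completes, the following check should report NotFound errors for both resources, confirming that they are gone:
oc get deployment/syndesis-operator is/syndesis-operator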
3.8. Deleting an OCP project that contains Fuse Online
Deleting an OpenShift project in which Fuse Online is installed deletes everything in the project. This includes all integrations that have been defined as well as all integrations that are running.
Prerequisites
- You have an OCP project in which Fuse Online is installed.
- You exported any integrations that you might want to use in some other OpenShift project in which Fuse Online is installed. If necessary, see Exporting integrations.
Procedure
Invoke the oc delete project command. For example, to delete an OpenShift project whose name is fuse-online-project, enter the following command:
oc delete project fuse-online-project
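Project deletion is asynchronous. You can confirm that the project is gone when the following command returns a NotFound error:
oc get project fuse-online-project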