Configuring dynamic plugins
Configuring dynamic plugins in Red Hat Developer Hub
Chapter 1. Installing Ansible plug-ins for Red Hat Developer Hub
Ansible plug-ins for Red Hat Developer Hub deliver an Ansible-specific portal experience with curated learning paths, push-button content creation, integrated development tools, and other opinionated resources.
Chapter 2. Installing and configuring Argo CD
You can use the Argo CD plugin to visualize the Continuous Delivery (CD) workflows in OpenShift GitOps.
2.1. Enabling the Argo CD plugin
The Argo CD plugin provides a visual overview of the application's status, deployment details, commit message, author of the commit, container image promoted to the environment, and deployment history.
Prerequisites
Add Argo CD instance information to your app-config.yaml config map as shown in the following example:
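A minimal sketch of such a configuration, assuming a single Argo CD instance; the instance name, URL, and credential variables are illustrative placeholders:

argocd:
  appLocatorMethods:
    - type: 'config'
      instances:
        - name: argoInstance1
          url: https://argo-cd.example.com
          username: ${ARGOCD_USERNAME}
          password: ${ARGOCD_PASSWORD}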
Note: Avoid using a trailing slash in the url, as it might cause unexpected behavior.

Add the following annotation to the entity's catalog-info.yaml file to identify the Argo CD applications:

annotations:
  ...
  # The label that Argo CD uses to fetch all the applications. The format to be used is label.key=label.value. For example, rht-gitops.com/janus-argocd=quarkus-app.
  argocd/app-selector: '${ARGOCD_LABEL_SELECTOR}'

Optional: Add the following annotation to the entity's catalog-info.yaml file to switch between Argo CD instances as shown in the following example:

annotations:
  ...
  # The Argo CD instance name used in `app-config.yaml`.
  argocd/instance-name: '${ARGOCD_INSTANCE}'

Note: If you do not set this annotation, the Argo CD plugin defaults to the first Argo CD instance configured in app-config.yaml.
Procedure
Add the following to your dynamic-plugins ConfigMap to enable the Argo CD plugin:
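A sketch of the dynamic-plugins entry; the package paths follow the RHDH dynamic-plugins naming convention and may differ in your release:

plugins:
  - package: ./dynamic-plugins/dist/roadiehq-backstage-plugin-argo-cd-backend-dynamic
    disabled: false
  - package: ./dynamic-plugins/dist/backstage-community-plugin-redhat-argocd
    disabled: false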
2.2. Enabling Argo CD Rollouts
The optional Argo CD Rollouts feature enhances Kubernetes by providing advanced deployment strategies, such as blue-green and canary deployments, for your applications. When integrated into the Backstage Kubernetes plugin, it allows developers and operations teams to visualize and manage Argo CD Rollouts seamlessly within the Backstage interface.
Prerequisites
- The Backstage Kubernetes plugin (@backstage/plugin-kubernetes) is installed and configured. To install and configure the Kubernetes plugin in Backstage, see the Installation and Configuration guide.
- You have access to the Kubernetes cluster with the necessary permissions to create and manage custom resources and ClusterRoles.
- The Kubernetes cluster has the argoproj.io group resources (for example, Rollouts and AnalysisRuns) installed.
Procedure
In the app-config.yaml file in your Backstage instance, add the following customResources component under the kubernetes configuration to enable Argo Rollouts and AnalysisRuns:
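A sketch following the Backstage Kubernetes plugin customResources schema; adjust the apiVersion to what is available on your cluster:

kubernetes:
  customResources:
    - group: 'argoproj.io'
      apiVersion: 'v1alpha1'
      plural: 'rollouts'
    - group: 'argoproj.io'
      apiVersion: 'v1alpha1'
      plural: 'analysisruns'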
Grant ClusterRole permissions for custom resources.

Note:
- If the Backstage Kubernetes plugin is already configured, the ClusterRole permissions for Rollouts and AnalysisRuns might already be granted.
- Use the prepared manifest to provide read-only ClusterRole access to both the Kubernetes and Argo CD plugins.

If the ClusterRole permission is not granted, use the following YAML manifest to create the ClusterRole:
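A read-only ClusterRole sketch; the metadata name is an illustrative placeholder:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backstage-read-only
rules:
  - apiGroups:
      - argoproj.io
    resources:
      - rollouts
      - analysisruns
    verbs:
      - get
      - list
      - watch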
Apply the manifest to the cluster using kubectl:

kubectl apply -f <your-clusterrole-file>.yaml

Ensure the ServiceAccount accessing the cluster has this ClusterRole assigned.
Add annotations to catalog-info.yaml to identify Kubernetes resources for Backstage.

For identifying resources by entity ID:

annotations:
  ...
  backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>

Optional: For identifying resources by namespace:

annotations:
  ...
  backstage.io/kubernetes-namespace: <RESOURCE_NAMESPACE>

For using custom label selectors, which override resource identification by entity ID or namespace:

annotations:
  ...
  backstage.io/kubernetes-label-selector: 'app=my-app,component=front-end'

Note: Ensure you specify the labels declared in backstage.io/kubernetes-label-selector on your Kubernetes resources. This annotation overrides entity-based or namespace-based identification annotations, such as backstage.io/kubernetes-id and backstage.io/kubernetes-namespace.
Add labels to Kubernetes resources to enable Backstage to find the appropriate Kubernetes resources.

Backstage Kubernetes plugin label: Add this label to map resources to specific Backstage entities:

labels:
  ...
  backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>

GitOps application mapping: Add this label to map Argo CD Rollouts to a specific GitOps application:

labels:
  ...
  app.kubernetes.io/instance: <GITOPS_APPLICATION_NAME>

Note: If you use the label selector annotation (backstage.io/kubernetes-label-selector), ensure the specified labels are present on the resources. The label selector overrides other annotations, such as kubernetes-id or kubernetes-namespace.
Verification
- Push the updated configuration to your GitOps repository to trigger a rollout.
- Open the Red Hat Developer Hub interface and navigate to the entity you configured.
- Select the CD tab and then select the GitOps application. The side panel opens.
In the Resources table of the side panel, verify that the following resources are displayed:
- Rollouts
- AnalysisRuns (optional)
Expand a rollout resource and review the following details:
- The Revisions row displays traffic distribution details for different rollout versions.
- The Analysis Runs row displays the status of analysis tasks that evaluate rollout success.
Chapter 3. Enabling and configuring the JFrog plugin
The JFrog Artifactory plugin is a front-end plugin that displays information about the container images stored in your JFrog Artifactory repository. The plugin is preinstalled with Developer Hub and disabled by default. To use it, you need to enable and configure it first.
The JFrog Artifactory plugin is a Technology Preview feature only.
Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information on Red Hat Technology Preview features, see Technology Preview Features Scope.
Additional detail on how Red Hat provides support for bundled community dynamic plugins is available on the Red Hat Developer Support Policy page.
3.1. Enabling the JFrog Artifactory plugin
Procedure
The JFrog Artifactory plugin is preinstalled in Developer Hub with basic configuration properties. To enable it, set the disabled property to false as follows:
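For example, in your dynamic-plugins configuration; the package path is an assumption based on the RHDH naming convention:

plugins:
  - package: ./dynamic-plugins/dist/backstage-community-plugin-jfrog-artifactory
    disabled: false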
3.2. Configuring the JFrog Artifactory plugin
Procedure
Set the proxy to the desired JFrog Artifactory server in the app-config.yaml file as follows:
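A sketch, assuming a token-authenticated Artifactory instance; the endpoint path, target host, and token variable are illustrative:

proxy:
  endpoints:
    '/jfrog-artifactory/api':
      target: https://<ARTIFACTORY_HOST>
      headers:
        Authorization: Bearer ${ARTIFACTORY_TOKEN}
      secure: true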
Add the following annotation to the entity's catalog-info.yaml file to enable the JFrog Artifactory plugin features in RHDH components:

metadata:
  annotations:
    'jfrog-artifactory/image-name': '<IMAGE-NAME>'
Chapter 4. Enabling and configuring the Keycloak plugin
The Keycloak backend plugin, which integrates Keycloak into Developer Hub, has the following capabilities:
- Synchronization of Keycloak users in a realm.
- Synchronization of Keycloak groups and their users in a realm.
The supported Red Hat Build of Keycloak (RHBK) version is 26.0.
4.1. Enabling the Keycloak plugin
Prerequisites
To enable the Keycloak plugin, you must set the following environment variables:
- KEYCLOAK_BASE_URL
- KEYCLOAK_LOGIN_REALM
- KEYCLOAK_REALM
- KEYCLOAK_CLIENT_ID
- KEYCLOAK_CLIENT_SECRET
Procedure
The Keycloak plugin is pre-loaded in Developer Hub with basic configuration properties. To enable it, set the disabled property to false as follows:
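For example; the package path is an assumption based on the RHDH naming convention:

plugins:
  - package: ./dynamic-plugins/dist/backstage-community-plugin-catalog-backend-module-keycloak-dynamic
    disabled: false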
4.2. Configuring the Keycloak plugin
Procedure
To configure the Keycloak plugin, add the following in your app-config.yaml file:

schedule
  Configure the schedule frequency, timeout, and initial delay. The fields support cron, ISO duration, and "human duration" formats, as used in code. For example:
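A sketch using the parameters described in the table below; the schedule values are illustrative:

catalog:
  providers:
    keycloakOrg:
      default:
        baseUrl: ${KEYCLOAK_BASE_URL}
        loginRealm: ${KEYCLOAK_LOGIN_REALM}
        realm: ${KEYCLOAK_REALM}
        clientId: ${KEYCLOAK_CLIENT_ID}
        clientSecret: ${KEYCLOAK_CLIENT_SECRET}
        schedule:
          frequency: { hours: 1 }
          timeout: { minutes: 50 }
          initialDelay: { seconds: 15 }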
userQuerySize and groupQuerySize
  Optionally, configure the Keycloak query parameters to define the number of users and groups to query at a time. The default value is 100 for both fields. For example:
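The query sizes shown here are illustrative:

catalog:
  providers:
    keycloakOrg:
      default:
        # ...
        userQuerySize: 500
        groupQuerySize: 250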
Authentication
Communication between Developer Hub and Keycloak is enabled by using the Keycloak API. Username and password, or client credentials are supported authentication methods.
The following table describes the parameters that you can configure to enable the plugin under the catalog.providers.keycloakOrg.<ENVIRONMENT_NAME> object in the app-config.yaml file:

| Name | Description | Default Value | Required |
|---|---|---|---|
| baseUrl | Location of the Keycloak server, such as https://localhost:8443/auth | "" | Yes |
| realm | Realm to synchronize | master | No |
| loginRealm | Realm used to authenticate | master | No |
| username | Username to authenticate | "" | Yes if using password-based authentication |
| password | Password to authenticate | "" | Yes if using password-based authentication |
| clientId | Client ID to authenticate | "" | Yes if using client credentials-based authentication |
| clientSecret | Client Secret to authenticate | "" | Yes if using client credentials-based authentication |
| userQuerySize | Number of users to query at a time | 100 | No |
| groupQuerySize | Number of groups to query at a time | 100 | No |
When using client credentials:

- Set the access type to confidential.
- Enable service accounts.
- Add the following roles from the realm-management client role:
  - query-groups
  - query-users
  - view-users
Optionally, if you have self-signed or corporate certificate issues, you can set the following environment variable before starting Developer Hub:
NODE_TLS_REJECT_UNAUTHORIZED=0

Warning: Setting the environment variable is not recommended.
4.3. Keycloak plugin metrics
The Keycloak backend plugin supports OpenTelemetry metrics that you can use to monitor fetch operations and diagnose potential issues.
4.3.1. Available Counters
| Metric Name | Description |
|---|---|
| backend_keycloak_fetch_task_failure_count_total | Counts fetch task failures where no data was returned due to an error. |
| backend_keycloak_fetch_data_batch_failure_count_total | Counts partial data batch failures. Even if some batches fail, the plugin continues fetching others. |
4.3.2. Labels
All counters include the taskInstanceId label, which uniquely identifies each scheduled fetch task. You can use this label to trace failures back to individual task executions.
Users can enter queries in the Prometheus UI or Grafana to explore and manipulate metric data.
In the following examples, a Prometheus Query Language (PromQL) expression returns the number of backend failures.
Example to get the number of backend failures associated with a taskInstanceId
backend_keycloak_fetch_data_batch_failure_count_total{taskInstanceId="df040f82-2e80-44bd-83b0-06a984ca05ba"} 1
Example to get the number of backend failures during the last hour
sum(backend_keycloak_fetch_data_batch_failure_count_total) - sum(backend_keycloak_fetch_data_batch_failure_count_total offset 1h)
PromQL supports arithmetic operations, comparison operators, logical/set operations, aggregation, and various functions. Users can combine these features to analyze time-series data effectively.
Additionally, the results can be visualized using Grafana.
4.3.3. Exporting Metrics
You can export metrics using any OpenTelemetry-compatible backend, such as Prometheus.
Chapter 5. Enabling and configuring the Nexus Repository Manager plugin
The Nexus Repository Manager plugin displays information about the build artifacts in your Developer Hub application. The build artifacts are available in the Nexus Repository Manager.
The Nexus Repository Manager plugin is a Technology Preview feature only.
Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information on Red Hat Technology Preview features, see Technology Preview Features Scope.
Additional detail on how Red Hat provides support for bundled community dynamic plugins is available on the Red Hat Developer Support Policy page.
5.1. Enabling the Nexus Repository Manager plugin
The Nexus Repository Manager plugin is pre-loaded in Developer Hub with basic configuration properties. To enable it, set the disabled property to false as follows:
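For example; the package path is an assumption based on the RHDH naming convention:

plugins:
  - package: ./dynamic-plugins/dist/backstage-community-plugin-nexus-repository-manager
    disabled: false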
5.2. Configuring the Nexus Repository Manager plugin
Set the proxy to the desired Nexus Repository Manager server in the app-config.yaml file as follows:
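A sketch of the proxy configuration; the target URL and authorization header are illustrative:

proxy:
  '/nexus-repository-manager':
    target: https://<NEXUS_REPOSITORY_MANAGER_URL>
    headers:
      X-Requested-With: 'XMLHttpRequest'
      # Authorization: Bearer <YOUR_TOKEN>
    changeOrigin: true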
Optional: Change the base URL of the Nexus Repository Manager proxy as follows:

nexusRepositoryManager:
  # default path is `/nexus-repository-manager`
  proxyPath: /custom-path
Optional: Enable the following experimental annotations:

nexusRepositoryManager:
  experimentalAnnotations: true
Annotate your entity using the following annotations:

metadata:
  annotations:
    # insert the chosen annotations here
    # example
    nexus-repository-manager/docker.image-name: `<ORGANIZATION>/<REPOSITORY>`
Chapter 6. Enabling the Tekton plugin
You can use the Tekton plugin to visualize the results of CI/CD pipeline runs on your Kubernetes or OpenShift clusters. The plugin allows users to visually see the high-level status of all associated tasks in the pipeline for their applications.
Prerequisites
- You have installed and configured the @backstage/plugin-kubernetes and @backstage/plugin-kubernetes-backend dynamic plugins.
- You have configured the Kubernetes plugin to connect to the cluster using a ServiceAccount.
- The ClusterRole must be granted for custom resources (PipelineRuns and TaskRuns) to the ServiceAccount accessing the cluster.

  Note: If you have the RHDH Kubernetes plugin configured, then the ClusterRole is already granted.

- To view the pod logs, you have granted permissions for pods/log.
- You can use the following code to grant the ClusterRole for custom resources and pod logs:
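A sketch of such a ClusterRole; the metadata name is an illustrative placeholder:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backstage-read-only
rules:
  - apiGroups:
      - ''
    resources:
      - pods/log
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - tekton.dev
    resources:
      - pipelineruns
      - taskruns
    verbs:
      - get
      - list
      - watch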
You can use the prepared manifest for a read-only ClusterRole, which provides access for both the Kubernetes plugin and the Tekton plugin.

Add the following annotation to the entity's catalog-info.yaml file to identify whether an entity contains the Kubernetes resources:

annotations:
  ...
  backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>
You can also add the backstage.io/kubernetes-namespace annotation to identify the Kubernetes resources using the defined namespace:

annotations:
  ...
  backstage.io/kubernetes-namespace: <RESOURCE_NS>
Add the following annotation to the catalog-info.yaml file of the entity to enable the Tekton-related features in RHDH. The value of the annotation identifies the name of the RHDH entity:

annotations:
  ...
  janus-idp.io/tekton: <BACKSTAGE_ENTITY_NAME>

Add a custom label selector, which RHDH uses to find the Kubernetes resources. The label selector takes precedence over the ID annotations:
annotations:
  ...
  backstage.io/kubernetes-label-selector: 'app=my-app,component=front-end'

Add the following label to the resources so that the Kubernetes plugin gets the Kubernetes resources from the requested entity:
labels:
  ...
  backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>

Note: When you use the label selector, the mentioned labels must be present on the resource.
Procedure
The Tekton plugin is pre-loaded in RHDH with basic configuration properties. To enable it, set the disabled property to false as follows:
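For example; the package path is an assumption based on the RHDH naming convention:

plugins:
  - package: ./dynamic-plugins/dist/backstage-community-plugin-tekton
    disabled: false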
Chapter 7. Installing the Topology plugin
7.1. Installing the Topology plugin
The Topology plugin enables you to visualize workloads, such as Deployment, Job, DaemonSet, StatefulSet, CronJob, Pod, and VirtualMachine, powering any service on your Kubernetes cluster.
Prerequisites
- You have installed and configured the @backstage/plugin-kubernetes-backend dynamic plugins.
- You have configured the Kubernetes plugin to connect to the cluster using a ServiceAccount.
- The ClusterRole must be granted to the ServiceAccount accessing the cluster.

  Note: If you have the Developer Hub Kubernetes plugin configured, then the ClusterRole is already granted.
Procedure
The Topology plugin is pre-loaded in Developer Hub with basic configuration properties. To enable it, set the disabled property to false as follows:
app-config.yaml fragment
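For example; the package path is an assumption based on the RHDH naming convention:

plugins:
  - package: ./dynamic-plugins/dist/backstage-community-plugin-topology
    disabled: false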
7.2. Configuring the Topology plugin
7.2.1. Viewing OpenShift routes
Procedure
To view OpenShift routes, grant read access to the routes resource in the ClusterRole:
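A rule fragment sketch to merge into the rules list of your ClusterRole:

- apiGroups:
    - route.openshift.io
  resources:
    - routes
  verbs:
    - get
    - list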
Also add the following in the kubernetes.customResources property in your app-config.yaml file:
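For example:

kubernetes:
  customResources:
    - group: 'route.openshift.io'
      apiVersion: 'v1'
      plural: 'routes'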
7.2.2. Viewing pod logs
Procedure
To view pod logs, you must grant the following permission to the ClusterRole:
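A rule fragment sketch to merge into the rules list of your ClusterRole:

- apiGroups:
    - ''
  resources:
    - pods
    - pods/log
  verbs:
    - get
    - list
    - watch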
7.2.3. Viewing Tekton PipelineRuns
Procedure
To view the Tekton PipelineRuns, grant read access to the pipelines, pipelineruns, and taskruns resources in the ClusterRole:
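A rule fragment sketch to merge into the rules list of your ClusterRole:

- apiGroups:
    - tekton.dev
  resources:
    - pipelines
    - pipelineruns
    - taskruns
  verbs:
    - get
    - list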
To view the Tekton PipelineRuns list in the side panel and the latest PipelineRuns status in the Topology node decorator, add the following code to the kubernetes.customResources property in your app-config.yaml file:
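For example; adjust the apiVersion to the Tekton API version available on your cluster:

kubernetes:
  customResources:
    - group: 'tekton.dev'
      apiVersion: 'v1'
      plural: 'pipelines'
    - group: 'tekton.dev'
      apiVersion: 'v1'
      plural: 'pipelineruns'
    - group: 'tekton.dev'
      apiVersion: 'v1'
      plural: 'taskruns'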
7.2.4. Viewing virtual machines
Prerequisites
- The OpenShift Virtualization operator is installed and configured on a Kubernetes cluster.

Procedure
Grant read access to the VirtualMachines resource in the ClusterRole:
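A rule fragment sketch to merge into the rules list of your ClusterRole:

- apiGroups:
    - kubevirt.io
  resources:
    - virtualmachines
    - virtualmachineinstances
  verbs:
    - get
    - list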
To view the virtual machine nodes on the topology plugin, add the following code to the kubernetes.customResources property in the app-config.yaml file:
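For example:

kubernetes:
  customResources:
    - group: 'kubevirt.io'
      apiVersion: 'v1'
      plural: 'virtualmachines'
    - group: 'kubevirt.io'
      apiVersion: 'v1'
      plural: 'virtualmachineinstances'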
7.2.5. Enabling the source code editor
To enable the source code editor, you must grant read access to the CheClusters resource in the ClusterRole as shown in the following example code:
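A rule fragment sketch to merge into the rules list of your ClusterRole:

- apiGroups:
    - org.eclipse.che
  resources:
    - checlusters
  verbs:
    - get
    - list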
To use the source code editor, you must add the following configuration to the kubernetes.customResources property in your app-config.yaml file:
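For example; the v2 API version assumes a current CheCluster custom resource definition:

kubernetes:
  customResources:
    - group: 'org.eclipse.che'
      apiVersion: 'v2'
      plural: 'checlusters'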
7.3. Managing labels and annotations for Topology plugins
7.3.1. Linking to the source code editor or the source
Add the following annotations to workload resources, such as Deployments, to navigate to the Git repository of the associated application using the source code editor:
annotations:
app.openshift.io/vcs-uri: <GIT_REPO_URL>
Add the following annotation to navigate to a specific branch:
annotations:
app.openshift.io/vcs-ref: <GIT_REPO_BRANCH>
If Red Hat OpenShift Dev Spaces is installed and configured and Git URL annotations are also added to the workload YAML file, then clicking on the edit code decorator redirects you to the Red Hat OpenShift Dev Spaces instance.
If you deploy your application using the OpenShift Container Platform (OCP) Git import flows, you do not need to add the labels because the import flows add them. Otherwise, you need to add the labels manually to the workload YAML file.
You can also add the app.openshift.io/edit-link annotation with the edit URL that you want to access using the decorator.
7.3.2. Entity annotation/label
For RHDH to detect that an entity has Kubernetes components, add the following annotation to the catalog-info.yaml file of the entity:
annotations:
backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>
Add the following label to the resources so that the Kubernetes plugin gets the Kubernetes resources from the requested entity:
labels:
backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>
When using the label selector, the mentioned labels must be present on the resource.
7.3.3. Namespace annotation
Procedure
To identify the Kubernetes resources using the defined namespace, add the backstage.io/kubernetes-namespace annotation:

annotations:
  backstage.io/kubernetes-namespace: <RESOURCE_NS>

The Red Hat OpenShift Dev Spaces instance is not accessible using the source code editor if the backstage.io/kubernetes-namespace annotation is added to the catalog-info.yaml file.

To retrieve the instance URL, you require the CheCluster custom resource (CR). As the CheCluster CR is created in the openshift-devspaces namespace, the instance URL is not retrieved if the namespace annotation value is not openshift-devspaces.
7.3.4. Label selector query annotation
You can write your own custom label, which RHDH uses to find the Kubernetes resources. The label selector takes precedence over the ID annotations:
annotations:
backstage.io/kubernetes-label-selector: 'app=my-app,component=front-end'
If you have multiple entities while Red Hat OpenShift Dev Spaces is configured and want multiple entities to support the edit code decorator that redirects to the Red Hat OpenShift Dev Spaces instance, you can add the backstage.io/kubernetes-label-selector annotation to the catalog-info.yaml file for each entity.
annotations:
backstage.io/kubernetes-label-selector: 'component in (<BACKSTAGE_ENTITY_NAME>,che)'
If you are using the previous label selector, you must add the following labels to your resources so that the Kubernetes plugin gets the Kubernetes resources from the requested entity:
labels:
component: che # add this label to your che cluster instance
labels:
component: <BACKSTAGE_ENTITY_NAME> # add this label to the other resources associated with your entity
You can also write your own custom query for the label selector with unique labels to differentiate your entities. However, you need to ensure that you add those labels to the resources associated with your entities including your CheCluster instance.
7.3.5. Displaying icon in the node
To display a runtime icon in the topology nodes, add the following label to workload resources, such as Deployments:
labels:
app.openshift.io/runtime: <RUNTIME_NAME>
Alternatively, you can include the following label to display the runtime icon:
labels:
app.kubernetes.io/name: <RUNTIME_NAME>
Supported values of <RUNTIME_NAME> include:
- django
- dotnet
- drupal
- go-gopher
- golang
- grails
- jboss
- jruby
- js
- nginx
- nodejs
- openjdk
- perl
- phalcon
- php
- python
- quarkus
- rails
- redis
- rh-spring-boot
- rust
- java
- rh-openjdk
- ruby
- spring
- spring-boot
Other values result in icons not being rendered for the node.
7.3.6. App grouping
To display workload resources such as deployments or pods in a visual group, add the following label:
labels:
app.kubernetes.io/part-of: <GROUP_NAME>
7.3.7. Node connector
Procedure
To display the workload resources such as deployments or pods with a visual connector, add the following annotation:
annotations:
  app.openshift.io/connects-to: '[{"apiVersion": <RESOURCE_APIVERSION>,"kind": <RESOURCE_KIND>,"name": <RESOURCE_NAME>}]'
For more information about the labels and annotations, see Guidelines for labels and annotations for OpenShift applications.
Chapter 8. Bulk importing in Red Hat Developer Hub
These features are for Technology Preview only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information on Red Hat Technology Preview features, see Technology Preview Features Scope.
Red Hat Developer Hub can automate the onboarding of GitHub repositories and GitLab projects, and track their import status.
8.1. Enabling and authorizing Bulk Import capabilities in Red Hat Developer Hub
You can enable the Bulk Import feature for users and give them the necessary permissions to access it. This feature is available for GitHub repositories and GitLab projects.
Prerequisites
- For GitHub only: You have enabled GitHub repository discovery.
Procedure
The Bulk Import plugins are installed but disabled by default. To enable the ./dynamic-plugins/dist/red-hat-developer-hub-backstage-plugin-bulk-import-backend-dynamic and ./dynamic-plugins/dist/red-hat-developer-hub-backstage-plugin-bulk-import plugins, edit your dynamic-plugins.yaml with the following content:

dynamic-plugins.yaml fragment

plugins:
  - package: ./dynamic-plugins/dist/red-hat-developer-hub-backstage-plugin-bulk-import-backend-dynamic
    disabled: false
  - package: ./dynamic-plugins/dist/red-hat-developer-hub-backstage-plugin-bulk-import
    disabled: false

See Installing and viewing plugins in Red Hat Developer Hub.
Configure the required bulk.import RBAC permission for the users who are not administrators as follows:

rbac-policy.csv fragment

p, role:default/bulk-import, bulk.import, use, allow
g, user:default/<your_user>, role:default/bulk-import

Note that only Developer Hub administrators or users with the bulk.import permission can use the Bulk Import feature. See Permission policies in Red Hat Developer Hub.
Verification
- The sidebar displays a Bulk Import option.
- The Bulk Import page shows a list of added GitHub repositories and GitLab projects.
8.2. Importing multiple GitHub repositories
In Red Hat Developer Hub, you can select your GitHub repositories and automate their onboarding to the Developer Hub catalog.
Prerequisites

- You have enabled the Bulk Import feature and given access to it.
Procedure
- Click Bulk Import in the left sidebar.
Click the Add button in the top-right corner to see the list of all repositories accessible from the configured GitHub integrations.
- From the Repositories view, you can select any repository, or search for any accessible repositories. For each repository selected, a catalog-info.yaml is generated.
- From the Organizations view, you can select any organization by clicking Select in the third column. This option allows you to select one or more repositories from the selected organization.
Click Preview file to view or edit the details of the pull request for each repository.
- Review the pull request description and the catalog-info.yaml file content.
- Optional: When the repository has a .github/CODEOWNERS file, you can select the Use CODEOWNERS file as Entity Owner checkbox to use it, rather than having the catalog-info.yaml contain a specific entity owner.
- Click Save.
Click Create pull requests. At this point, a set of dry-run checks runs against the selected repositories to ensure they meet the requirements for import, such as:
- Verifying that there is no entity in the Developer Hub catalog with the name specified in the repository catalog-info.yaml
- Verifying that the repository is not empty
- Verifying that the repository contains a .github/CODEOWNERS file if the Use CODEOWNERS file as Entity Owner checkbox is selected for that repository

If any errors occur, the pull requests are not created, and you see a Failed to create PR error message detailing the issues. To view more details about the reasons, click Edit.

If there are no errors, the pull requests are created, and you are redirected to the list of added repositories.
- Review and merge each pull request that creates a catalog-info.yaml file.
Verification
- The Added entities list displays the repositories you imported, each with an appropriate status: either Waiting for approval or Added.
- For each Waiting for approval import job listed, there is a corresponding pull request adding the catalog-info.yaml file in the corresponding repository.
8.3. Importing multiple GitLab repositories
In Red Hat Developer Hub, you can select your GitLab projects and automate their onboarding to the Developer Hub catalog. This feature is a Technology Preview feature.
Technology Preview features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. However, these features are not fully supported under Red Hat Subscription Level Agreements, may not be functionally complete, and are not intended for production use. As Red Hat considers making future iterations of Technology Preview features generally available, we will attempt to resolve any issues that customers experience when using these features. See: Technology Preview support scope.
Prerequisites
- You have enabled the Bulk Import feature and given access to it.
- You have set up a GitLab personal access token (PAT).
Procedure
- In RHDH, click Bulk Import.
- Click Import.
- Select GitLab as your Approval tool option.
Use the Project and Group views to see the list of all available GitLab projects and groups:
- Use the Project view to select GitLab projects for importing.
- Use the Group view to select GitLab groups and their associated projects for importing.
- In GitLab, review the automatically created "Add catalog-info.yaml file" merge request for each project you selected for Bulk Import.
- Merge the merge request.
Verification
- In RHDH, click Bulk Import.
- In the Imported entities list, each imported GitLab project has the appropriate status: either Waiting for approval or Added.
- For each Waiting for approval import job listed, there is a corresponding merge request adding the catalog-info.yaml file in the corresponding project.
8.4. Monitoring Bulk Import actions using audit logs
The Bulk Import backend plugin adds the following events to the Developer Hub audit logs. See Audit logs in Red Hat Developer Hub for more information on how to configure and view audit logs.
Bulk Import Events:
BulkImportUnknownEndpoint
  Tracks requests to unknown endpoints.
BulkImportPing
  Tracks GET requests to the /ping endpoint, which allows us to make sure the bulk import backend is up and running.
BulkImportFindAllOrganizations
  Tracks GET requests to the /organizations endpoint, which returns the list of organizations accessible from all configured GitHub integrations.
BulkImportFindRepositoriesByOrganization
  Tracks GET requests to the /organizations/:orgName/repositories endpoint, which returns the list of repositories for the specified organization (accessible from any of the configured GitHub integrations).
BulkImportFindAllRepositories
  Tracks GET requests to the /repositories endpoint, which returns the list of repositories accessible from all configured GitHub integrations.
BulkImportFindAllImports
  Tracks GET requests to the /imports endpoint, which returns the list of existing import jobs along with their statuses.
BulkImportCreateImportJobs
  Tracks POST requests to the /imports endpoint, which allows you to submit requests to bulk-import one or many repositories into the Developer Hub catalog, by eventually creating import pull requests in the target repositories.
BulkImportFindImportStatusByRepo
  Tracks GET requests to the /import/by-repo endpoint, which fetches details about the import job for the specified repository.
BulkImportDeleteImportByRepo
  Tracks DELETE requests to the /import/by-repo endpoint, which deletes any existing import job for the specified repository, by closing any open import pull request that might have been created.
8.5. Input parameters for Bulk Import Scaffolder template
As an administrator, you can use the Bulk Import plugin to run a Scaffolder template task with specified parameters, which you must define within the template.
The Bulk Import plugin analyzes Git repository information and provides the following parameters for the Scaffolder template task:
repoUrl
  Normalized repository URL in the following format:

  ${gitProviderHost}?owner=${owner}&repo=${repository-name}

name
  The repository name.
organization
  The repository owner, which can be a user nickname or organization name.
branchName
  The proposed repository branch. By default, the proposed repository branch is bulk-import-catalog-entity.
targetBranchName
  The default branch of the Git repository.
gitProviderHost
  The Git provider host parsed from the repository URL. You can use this parameter to write Git-provider-agnostic templates.
Example of a Scaffolder template:
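A hedged sketch of a template that consumes the parameters listed above; the template name, skeleton path, and action choices are illustrative assumptions:

apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: bulk-import-template  # illustrative name
  title: Bulk import template
spec:
  parameters:
    - title: Repository data provided by the Bulk Import plugin
      properties:
        repoUrl:
          type: string
        name:
          type: string
        organization:
          type: string
        branchName:
          type: string
        targetBranchName:
          type: string
        gitProviderHost:
          type: string
  steps:
    - id: generate-catalog-info
      name: Generate catalog-info.yaml
      action: fetch:template
      input:
        url: ./skeleton  # illustrative skeleton directory
        values:
          name: ${{ parameters.name }}
          organization: ${{ parameters.organization }}
    - id: open-pr
      name: Open a pull request against the default branch
      action: publish:github:pull-request
      input:
        repoUrl: ${{ parameters.repoUrl }}
        branchName: ${{ parameters.branchName }}
        targetBranchName: ${{ parameters.targetBranchName }}
        title: Add catalog-info.yaml
        description: Registers this repository in the Developer Hub catalog.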
8.6. Setting up a custom Scaffolder workflow for Bulk Import
As an administrator, you can create a custom Scaffolder template in line with the repository conventions of your organization and add the template into the Red Hat Developer Hub catalog for use by the Bulk Import plugin on multiple selected repositories.
You can define various custom tasks, including, but not limited to the following:
- Importing existing catalog entities from a repository
- Creating pull requests for cleanup
- Calling webhooks for external system integration
Prerequisites
- You created a custom Scaffolder template for the Bulk Import plugin.
You have run your RHDH instance with the following environment variable enabled to allow the use of the Scaffolder functionality:

export NODE_OPTIONS=--no-node-snapshot
Procedure
Configure your app-config.yaml configuration to instruct the Bulk Import plugin to use your custom template as shown in the following example:
bulkImport:
  importTemplate: <your_template_entity_reference_or_template_name>
  importAPI: open-pull-requests | scaffolder

where:

importTemplate
  Enter your Scaffolder template entity reference.
importAPI
  Set the API to scaffolder to trigger the defined workflow for high-fidelity automation. This field defines the import workflow and currently supports the following two options:

  open-pull-requests
    This is the default import workflow, which includes the logic for creating pull requests for every selected repository.
  scaffolder
    This workflow uses an import scenario defined in the Scaffolder template to create import jobs. Select this option to use the custom import scenario defined in your Scaffolder template.
Optional: You can direct the Bulk Import plugin to hand off the entire list of selected repositories to a custom Orchestrator workflow.
Important: The Scaffolder template must be generic and not specific to a single repository if you want your custom Scaffolder template to run successfully for every repository in the bulk list.
Verification
- The Bulk Import plugin runs the custom Scaffolder template for the list of repositories using the /task-imports API endpoint.
8.7. Data handoff and custom workflow design
When you configure the Bulk Import plugin by setting the importAPI field to scaffolder, the Bulk Import Backend passes all necessary context directly to the Scaffolder API.
As an administrator, you can define the Scaffolder template workflow and structure the workflow to do the following:
Define template parameters to consume input
  Structure the Scaffolder template to receive the repository data as template parameters for the current workflow run. The template must be generic, and not specific to a single repository, so that it can successfully run for every repository in the bulk list.
Automate processing for each repository
  Implement the custom logic needed for a single repository within the template. The Orchestrator iterates through the repository list, launching the template once for each repository, and passes only the data for that single repository to the template run. This allows you to automate tasks such as creating the catalog-info.yaml, running compliance checks, or registering the entity with the catalog.
Chapter 9. ServiceNow Custom actions in Red Hat Developer Hub
These features are for Technology Preview only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information on Red Hat Technology Preview features, see Technology Preview Features Scope.
In Red Hat Developer Hub, you can access ServiceNow custom actions (custom actions) for fetching and registering resources in the catalog.
The custom actions in Developer Hub enable you to facilitate and automate the management of records. Using the custom actions, you can perform the following actions:
- Create, update, or delete a record
- Retrieve information about a single record or multiple records
9.1. Enabling ServiceNow custom actions plugin in Red Hat Developer Hub
In Red Hat Developer Hub, the ServiceNow custom actions are provided as a pre-loaded plugin, which is disabled by default. You can enable the custom actions plugin using the following procedure.
Prerequisites
- Red Hat Developer Hub is installed and running.
- You have created a project in the Developer Hub.
Procedure
To activate the custom actions plugin, add a package with the plugin name and update the disabled field in your Helm chart as follows:
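A sketch of the Helm chart values; the package path is an assumption based on the RHDH dynamic-plugins naming convention:

global:
  dynamic:
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - package: ./dynamic-plugins/dist/backstage-community-plugin-scaffolder-backend-module-servicenow-dynamic
        disabled: false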
Note: The default configuration for a plugin is extracted from the dynamic-plugins.default.yaml file; however, you can use a pluginConfig entry to override the default configuration.

Set the following variables in the Helm chart to access the custom actions:
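A sketch of the corresponding configuration; the environment variable names are illustrative:

servicenow:
  baseUrl: ${SERVICENOW_BASE_URL}
  username: ${SERVICENOW_USERNAME}
  password: ${SERVICENOW_PASSWORD}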
9.2. Supported ServiceNow custom actions in Red Hat Developer Hub
The ServiceNow custom actions enable you to manage records in the Red Hat Developer Hub. The custom actions support the following HTTP methods for API requests:
- GET: Retrieves specified information from a specified resource endpoint
- POST: Creates or updates a resource
- PUT: Modifies a resource
- PATCH: Updates a resource
- DELETE: Deletes a resource

- [GET] servicenow:now:table:retrieveRecord
Retrieves information of a specified record from a table in the Developer Hub.
Table 9.1. Input parameters

| Name | Type | Requirement | Description |
|---|---|---|---|
| tableName | string | Required | Name of the table to retrieve the record from |
| sysId | string | Required | Unique identifier of the record to retrieve |
| sysparmDisplayValue | enum("true", "false", "all") | Optional | Returns field display values such as true, actual values as false, or both. The default value is false. |
| sysparmExcludeReferenceLink | boolean | Optional | Set as true to exclude Table API links for reference fields. The default value is false. |
| sysparmFields | string[] | Optional | Array of fields to return in the response |
| sysparmView | string | Optional | Renders the response according to the specified UI view. You can override this parameter using sysparm_fields. |
| sysparmQueryNoDomain | boolean | Optional | Set as true to access data across domains if authorized. The default value is false. |

Table 9.2. Output parameters

| Name | Type | Description |
|---|---|---|
| result | Record<PropertyKey, unknown> | The response body of the request |
- [GET] servicenow:now:table:retrieveRecords
Retrieves information about multiple records from a table in the Developer Hub.
Table 9.3. Input parameters

| Name | Type | Requirement | Description |
|---|---|---|---|
| tableName | string | Required | Name of the table to retrieve the records from |
| sysparamQuery | string | Optional | Encoded query string used to filter the results |
| sysparmDisplayValue | enum("true", "false", "all") | Optional | Returns field display values such as true, actual values as false, or both. The default value is false. |
| sysparmExcludeReferenceLink | boolean | Optional | Set as true to exclude Table API links for reference fields. The default value is false. |
| sysparmSuppressPaginationHeader | boolean | Optional | Set as true to suppress the pagination header. The default value is false. |
| sysparmFields | string[] | Optional | Array of fields to return in the response |
| sysparmLimit | int | Optional | Maximum number of results returned per page. The default value is 10,000. |
| sysparmView | string | Optional | Renders the response according to the specified UI view. You can override this parameter using sysparm_fields. |
| sysparmQueryCategory | string | Optional | Name of the query category to use for queries |
| sysparmQueryNoDomain | boolean | Optional | Set as true to access data across domains if authorized. The default value is false. |
| sysparmNoCount | boolean | Optional | Does not execute a select count(*) on the table. The default value is false. |

Table 9.4. Output parameters

| Name | Type | Description |
|---|---|---|
| result | Record<PropertyKey, unknown> | The response body of the request |
- [POST] servicenow:now:table:createRecord
Creates a record in a table in the Developer Hub.
Table 9.5. Input parameters

| Name | Type | Requirement | Description |
|---|---|---|---|
| tableName | string | Required | Name of the table to save the record in |
| requestBody | Record<PropertyKey, unknown> | Optional | Field name and associated value for each parameter to define in the specified record |
| sysparmDisplayValue | enum("true", "false", "all") | Optional | Returns field display values such as true, actual values as false, or both. The default value is false. |
| sysparmExcludeReferenceLink | boolean | Optional | Set as true to exclude Table API links for reference fields. The default value is false. |
| sysparmFields | string[] | Optional | Array of fields to return in the response |
| sysparmInputDisplayValue | boolean | Optional | Set field values using their display value as true or actual value as false. The default value is false. |
| sysparmSuppressAutoSysField | boolean | Optional | Set as true to suppress auto-generation of system fields. The default value is false. |
| sysparmView | string | Optional | Renders the response according to the specified UI view. You can override this parameter using sysparm_fields. |

Table 9.6. Output parameters

| Name | Type | Description |
|---|---|---|
| result | Record<PropertyKey, unknown> | The response body of the request |
- [PUT] servicenow:now:table:modifyRecord
Modifies a record in a table in the Developer Hub.
Table 9.7. Input parameters

| Name | Type | Requirement | Description |
|---|---|---|---|
| tableName | string | Required | Name of the table to modify the record from |
| sysId | string | Required | Unique identifier of the record to modify |
| requestBody | Record<PropertyKey, unknown> | Optional | Field name and associated value for each parameter to define in the specified record |
| sysparmDisplayValue | enum("true", "false", "all") | Optional | Returns field display values such as true, actual values as false, or both. The default value is false. |
| sysparmExcludeReferenceLink | boolean | Optional | Set as true to exclude Table API links for reference fields. The default value is false. |
| sysparmFields | string[] | Optional | Array of fields to return in the response |
| sysparmInputDisplayValue | boolean | Optional | Set field values using their display value as true or actual value as false. The default value is false. |
| sysparmSuppressAutoSysField | boolean | Optional | Set as true to suppress auto-generation of system fields. The default value is false. |
| sysparmView | string | Optional | Renders the response according to the specified UI view. You can override this parameter using sysparm_fields. |
| sysparmQueryNoDomain | boolean | Optional | Set as true to access data across domains if authorized. The default value is false. |

Table 9.8. Output parameters

| Name | Type | Description |
|---|---|---|
| result | Record<PropertyKey, unknown> | The response body of the request |
- [PATCH] servicenow:now:table:updateRecord
Updates a record in a table in the Developer Hub.
Table 9.9. Input parameters

| Name | Type | Requirement | Description |
|---|---|---|---|
| tableName | string | Required | Name of the table to update the record in |
| sysId | string | Required | Unique identifier of the record to update |
| requestBody | Record<PropertyKey, unknown> | Optional | Field name and associated value for each parameter to define in the specified record |
| sysparmDisplayValue | enum("true", "false", "all") | Optional | Returns field display values such as true, actual values as false, or both. The default value is false. |
| sysparmExcludeReferenceLink | boolean | Optional | Set as true to exclude Table API links for reference fields. The default value is false. |
| sysparmFields | string[] | Optional | Array of fields to return in the response |
| sysparmInputDisplayValue | boolean | Optional | Set field values using their display value as true or actual value as false. The default value is false. |
| sysparmSuppressAutoSysField | boolean | Optional | Set as true to suppress auto-generation of system fields. The default value is false. |
| sysparmView | string | Optional | Renders the response according to the specified UI view. You can override this parameter using sysparm_fields. |
| sysparmQueryNoDomain | boolean | Optional | Set as true to access data across domains if authorized. The default value is false. |

Table 9.10. Output parameters

| Name | Type | Description |
|---|---|---|
| result | Record<PropertyKey, unknown> | The response body of the request |
- [DELETE] servicenow:now:table:deleteRecord
Deletes a record from a table in the Developer Hub.
Table 9.11. Input parameters

| Name | Type | Requirement | Description |
|---|---|---|---|
| tableName | string | Required | Name of the table to delete the record from |
| sysId | string | Required | Unique identifier of the record to delete |
| sysparmQueryNoDomain | boolean | Optional | Set as true to access data across domains if authorized. The default value is false. |
Chapter 10. Kubernetes custom actions in Red Hat Developer Hub
With Kubernetes custom actions, you can create and manage Kubernetes resources.
The Kubernetes custom actions plugin is preinstalled and disabled on a Developer Hub instance by default. You can disable or enable the Kubernetes custom actions plugin, and change other parameters, by configuring the Red Hat Developer Hub Helm chart.
Kubernetes scaffolder actions and Kubernetes custom actions refer to the same concept throughout this documentation.
10.1. Enabling Kubernetes custom actions plugin in Red Hat Developer Hub
In Red Hat Developer Hub, the Kubernetes custom actions are provided as a preinstalled plugin, which is disabled by default. You can enable the Kubernetes custom actions plugin by updating the disabled key value in your Helm chart.
Prerequisites
- You have installed Red Hat Developer Hub with the Helm chart.
Procedure
To enable the Kubernetes custom actions plugin, complete the following step:
In your Helm chart, add a package with the Kubernetes custom actions plugin name and update the disabled field. For example:
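A sketch of the Helm chart values; the package path is an assumption based on the RHDH dynamic-plugins naming convention:

global:
  dynamic:
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - package: ./dynamic-plugins/dist/backstage-community-plugin-scaffolder-backend-module-kubernetes-dynamic
        disabled: false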
Note: The default configuration for a plugin is extracted from the dynamic-plugins.default.yaml file; however, you can use a pluginConfig entry to override the default configuration.
10.2. Using Kubernetes custom actions plugin in Red Hat Developer Hub
In Red Hat Developer Hub, the Kubernetes custom actions enable you to run template actions for Kubernetes.
Procedure
To use a Kubernetes custom action in your custom template, add the following Kubernetes actions to your template:
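A sketch of the step definition, using the input parameters documented in Section 10.4; the input values are illustrative placeholders:

steps:
  - id: create-kubernetes-namespace
    name: Create Kubernetes namespace
    action: kubernetes:create-namespace
    input:
      namespace: <NAMESPACE_NAME>
      clusterRef: <CLUSTER_ENTITY_REFERENCE>
      token: <KUBERNETES_API_BEARER_TOKEN>
      skipTLSVerify: false
      labels: app.io/type=ns; app.io/managed-by=org;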
10.3. Creating a template using Kubernetes custom actions in Red Hat Developer Hub
Procedure
To create a template, define a Template object as a YAML file.

The Template object describes the template and its metadata. It also contains required input variables and a list of actions that are executed by the scaffolding service.
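A hedged sketch of such a Template object using the kubernetes:create-namespace action; the metadata, parameter titles, and Secret field wiring are illustrative:

apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: create-kubernetes-namespace
  title: Create a Kubernetes namespace
  description: Creates a Kubernetes namespace
spec:
  owner: user:guest
  type: service
  parameters:
    - title: Namespace information
      required:
        - namespace
        - token
      properties:
        namespace:
          title: Namespace name
          type: string
          description: Name of the namespace to create
        clusterRef:
          title: Cluster reference
          type: string
          description: Cluster resource entity reference from the catalog
        token:
          title: Token
          type: string
          ui:field: Secret
          description: Kubernetes API bearer token used for authentication
  steps:
    - id: create-kubernetes-namespace
      name: Create Kubernetes namespace
      action: kubernetes:create-namespace
      input:
        namespace: ${{ parameters.namespace }}
        clusterRef: ${{ parameters.clusterRef }}
        token: ${{ secrets.token }}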
10.4. Supported Kubernetes custom actions in Red Hat Developer Hub
In Red Hat Developer Hub, you can use custom Kubernetes actions in scaffolder templates.
Custom Kubernetes scaffolder actions
- Action: kubernetes:create-namespace
- Creates a namespace for the Kubernetes cluster in the Developer Hub.
| Parameter name | Type | Requirement | Description | Example |
|---|---|---|---|---|
| namespace | string | Required | Name of the Kubernetes namespace | |
| clusterRef | string | Required only if url is not defined | Cluster resource entity reference from the catalog | |
| url | string | Required only if clusterRef is not defined | API url of the Kubernetes cluster | |
| token | string | Required | Kubernetes API bearer token used for authentication | |
| skipTLSVerify | boolean | Optional | If true, certificate verification is skipped | false |
| caData | string | Optional | Base64 encoded certificate data | |
| labels | string | Optional | Labels applied to the namespace | app.io/type=ns; app.io/managed-by=org; |
Chapter 11. Configuring Red Hat Developer Hub Events Module
Use the Events Module together with scheduled updates to make sure your GitHub user or catalog entities are updated whenever changes occur in the external system. This is a Developer Preview feature.
Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to functionality in advance of possible inclusion in a Red Hat product offering. Customers can use these features to test functionality and provide feedback during the development process. Developer Preview features might not have any documentation, are subject to change or removal at any time, and have received limited testing. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
For more information about the support scope of Red Hat Developer Preview features, see Developer Preview Support Scope.
11.1. Configuring Events Module for GitHub
Learn how to configure the Events Module for use with the RHDH GitHub Discovery feature and GitHub organization data. This is a Developer Preview feature.
Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to functionality in advance of possible inclusion in a Red Hat product offering. Customers can use these features to test functionality and provide feedback during the development process. Developer Preview features might not have any documentation, are subject to change or removal at any time, and have received limited testing. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
For more information about the support scope of Red Hat Developer Preview features, see Developer Preview Support Scope.
Prerequisites
- You have added your GitHub integration credentials in the app-config.yaml file.
- You have defined the schedule.frequency in the app-config.yaml file as a longer time period, such as 24 hours.
- For GitHub Discovery only: You have enabled GitHub Discovery.
- For GitHub Organizational Data only: You have enabled GitHub authentication with user ingestion.
Procedure
Add the GitHub Events Module to your dynamic-plugins.yaml configuration file as follows:
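For example; the package path is an assumption based on the RHDH dynamic-plugins naming convention:

plugins:
  - package: ./dynamic-plugins/dist/backstage-plugin-events-backend-module-github-dynamic
    disabled: false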
To create HTTP endpoints to receive events for the github topic, add the following to your app-config.yaml file:
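A sketch following the events backend HTTP ingestion schema:

events:
  http:
    topics:
      - github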
Important: Secure your workflow by adding a webhook secret token to validate webhook deliveries.

Create a GitHub webhook with the following specifications:
- For GitHub Discovery events: push, repository
- For GitHub Organizational Data events: organization, team, and membership
- Content type: application/json
- Payload URL: https://<my_developer_hub_domain>/api/events/http/github

Note: The payload URL is the URL exposed after configuring the HTTP endpoint.
Verification
Check the log for an entry that confirms that the HTTP endpoint was set up successfully to receive events from the GitHub webhook.
Example of a log of a successfully set up HTTP endpoint:

{"level":"\u001b[32minfo\u001b[39m","message":"Registered /api/events/http/github to receive events","plugin":"events","service":"backstage","timestamp":"2025-11-03 02:19:12"}
For GitHub Discovery only:
Trigger a GitHub push event by adding, modifying, or deleting the catalog-info.yaml file in the repository where you set up your webhook. A record of this event should appear in the pod logs of your RHDH instance.

Example of a log with changes to the catalog-info.yaml file:

{"level":"\u001b[32minfo\u001b[39m","message":"Processed Github push event: added 0 - removed 0 - modified 1","plugin":"catalog","service":"backstage","span_id":"47534b96c4afc654","target":"github-provider:providerId","timestamp":"2025-06-15 21:33:14","trace_flags":"01","trace_id":"ecc782deb86aed2027da0ae6b1999e5c"}
For GitHub Organizational Data only:
- Newly added users and teams appear in the RHDH catalog.
Chapter 12. Overriding Core Backend Service Configuration
The Red Hat Developer Hub (RHDH) backend platform consists of a number of core services that are well encapsulated. The RHDH backend installs these default core services statically during initialization.
Customize a core service by installing it as a BackendFeature by using the dynamic plugin functionality.
Procedure
Configure Developer Hub to allow a core service override by setting the corresponding core service ID environment variable to true in the Developer Hub app-config.yaml configuration file.

Table 12.1. Environment variables and core service IDs

| Variable | Overrides the related service |
|---|---|
| ENABLE_CORE_AUTH_OVERRIDE | core.auth |
| ENABLE_CORE_CACHE_OVERRIDE | core.cache |
| ENABLE_CORE_ROOTCONFIG_OVERRIDE | core.rootConfig |
| ENABLE_CORE_DATABASE_OVERRIDE | core.database |
| ENABLE_CORE_DISCOVERY_OVERRIDE | core.discovery |
| ENABLE_CORE_HTTPAUTH_OVERRIDE | core.httpAuth |
| ENABLE_CORE_HTTPROUTER_OVERRIDE | core.httpRouter |
| ENABLE_CORE_LIFECYCLE_OVERRIDE | core.lifecycle |
| ENABLE_CORE_LOGGER_OVERRIDE | core.logger |
| ENABLE_CORE_PERMISSIONS_OVERRIDE | core.permissions |
| ENABLE_CORE_ROOTHEALTH_OVERRIDE | core.rootHealth |
| ENABLE_CORE_ROOTHTTPROUTER_OVERRIDE | core.rootHttpRouter |
| ENABLE_CORE_ROOTLIFECYCLE_OVERRIDE | core.rootLifecycle |
| ENABLE_CORE_SCHEDULER_OVERRIDE | core.scheduler |
| ENABLE_CORE_USERINFO_OVERRIDE | core.userInfo |
| ENABLE_CORE_URLREADER_OVERRIDE | core.urlReader |
| ENABLE_EVENTS_SERVICE_OVERRIDE | events.service |
Install your custom core service as a
BackendFeatureas shown in the following example:
Example of a BackendFeature middleware function to handle incoming HTTP requests
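A TypeScript sketch of such a BackendFeature, assuming the rootHttpRouterServiceFactory API from @backstage/backend-defaults; the middleware name and header are illustrative:

import { rootHttpRouterServiceFactory } from '@backstage/backend-defaults/rootHttpRouter';
import type { NextFunction, Request, Response } from 'express';

// Illustrative middleware: tags every response with an extra header.
function addTestHeaderMiddleware() {
  return (_req: Request, res: Response, next: NextFunction) => {
    res.setHeader('x-test-header', 'rhdh');
    next();
  };
}

// BackendFeature that overrides the core.rootHttpRouter service.
export const customRootHttpServerFactory = rootHttpRouterServiceFactory({
  configure: ({ app, routes, middleware, logger }) => {
    logger.info('Using custom rootHttpRouter configure function');
    app.use(middleware.helmet());
    app.use(middleware.cors());
    app.use(middleware.compression());
    app.use(middleware.logging());
    // Register the custom middleware before the route handlers.
    app.use(addTestHeaderMiddleware());
    app.use(routes);
    app.use(middleware.notFound());
    app.use(middleware.error());
  },
});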
In the previous example, as the BackendFeature overrides the default implementation of the HTTP router service, you must set the ENABLE_CORE_ROOTHTTPROUTER_OVERRIDE environment variable to true so that Developer Hub does not install the default implementation automatically.