Configuring Red Hat Developer Hub
Abstract

Add custom config maps and secrets to configure your Red Hat Developer Hub instance to work in your IT ecosystem.
Preface
Learn how to configure Red Hat Developer Hub for production to work in your IT ecosystem by adding custom config maps and secrets.
Chapter 1. Provisioning and using your custom Red Hat Developer Hub configuration
To configure Red Hat Developer Hub, use these methods, which are widely used to configure a Red Hat OpenShift Container Platform application:
- Use config maps to mount files and directories.
- Use secrets to inject environment variables.
Learn to apply these methods to Developer Hub:
- Provision your custom config maps and secrets to OpenShift Container Platform.
- Use your selected deployment method to mount the config maps and inject the secrets.
1.1. Provisioning your custom Red Hat Developer Hub configuration
To configure Red Hat Developer Hub, provision your custom Red Hat Developer Hub config maps and secrets to Red Hat OpenShift Container Platform before running Red Hat Developer Hub.
You can skip this step and run Developer Hub with the default config map and secret; however, your changes to this default configuration might be reverted when Developer Hub restarts.
Prerequisites
- By using the OpenShift CLI (oc), you have access, with developer permissions, to the OpenShift Container Platform cluster aimed at containing your Developer Hub instance.
Procedure
Author your custom <my_product_secrets>.txt file to provision your secrets as environment variable values in an OpenShift Container Platform secret, rather than in clear text in your configuration files. It contains one secret per line in KEY=value form.

Author your custom app-config.yaml file. This is the main Developer Hub configuration file. The baseUrl field is mandatory in your app-config.yaml file to ensure proper functionality of Developer Hub. You must specify the baseUrl in both the app and backend sections to avoid errors during initialization.

Example 1.1. Configuring the baseUrl in app-config.yaml

Optionally, enter any additional configuration that your instance requires.
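As a sketch of Example 1.1, assuming your instance is served at the hypothetical URL https://my-rhdh.example.com, the baseUrl configuration might look like the following; the cors.origin entry is an optional addition commonly paired with baseUrl:

```yaml
app:
  # Assumed URL -- replace with the URL of your Developer Hub instance.
  baseUrl: https://my-rhdh.example.com
backend:
  baseUrl: https://my-rhdh.example.com
  cors:
    origin: https://my-rhdh.example.com
```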
Provision your custom configuration files to your OpenShift Container Platform cluster.
Create the <my-rhdh-project> project aimed at containing your Developer Hub instance.
$ oc create namespace my-rhdh-project

Alternatively, create the project by using the web console.
Provision your app-config.yaml file to the my-rhdh-app-config config map in the <my-rhdh-project> project.

$ oc create configmap my-rhdh-app-config --from-file=app-config.yaml --namespace=my-rhdh-project

Alternatively, create the config map by using the web console.
Provision your <my_product_secrets>.txt file to the <my_product_secrets> secret in the <my-rhdh-project> project.

$ oc create secret generic <my_product_secrets> --from-file=<my_product_secrets>.txt --namespace=my-rhdh-project

Alternatively, create the secret by using the web console.
<my_product_secrets> is your preferred Developer Hub secret name, specifying the identifier for your secret configuration within Developer Hub.
Next steps
Consider provisioning additional config maps and secrets:
- To use an external PostgreSQL database, provision your PostgreSQL database secrets.
- To enable dynamic plugins, provision your dynamic plugins config map.
- To configure authorization by using external files, provision your RBAC policies config map.
1.2. Using the Red Hat Developer Hub Operator to run Developer Hub with your custom configuration
To use the Developer Hub Operator to run Red Hat Developer Hub with your custom configuration, create your Backstage custom resource (CR) that:
- Mounts files provisioned in your custom config maps.
- Injects environment variables provisioned in your custom secrets.
Prerequisites
- By using the OpenShift CLI (oc), you have access, with developer permissions, to the OpenShift Container Platform cluster aimed at containing your Developer Hub instance.
- Your OpenShift Container Platform administrator has installed the Red Hat Developer Hub Operator in OpenShift Container Platform.
- You have provisioned your custom config maps and secrets in your <my-rhdh-project> project.
Procedure
Author your Backstage CR in a my-rhdh-custom-resource.yaml file to use your custom config maps and secrets.

Example 1.2. Minimal my-rhdh-custom-resource.yaml custom resource example

Example 1.3. my-rhdh-custom-resource.yaml custom resource example with dynamic plugins and RBAC policies config maps, and external PostgreSQL database secrets

- Mandatory fields
- No fields are mandatory. You can create an empty Backstage CR and run Developer Hub with the default configuration.
- Optional fields
spec.application.appConfig.configMaps
- Enter your config map name list.

Mount files in the my-rhdh-app-config config map.

Example 1.4. Mount files in the my-rhdh-app-config and rbac-policies config maps.

spec.application.extraEnvs.envs
- Optionally, enter your additional environment variables that are not secrets, such as your proxy environment variables.
Example 1.5. Inject your HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables.

spec.application.extraEnvs.secrets
- Enter your environment variables secret name list.

Example 1.6. Inject the environment variables in your Red Hat Developer Hub secret

spec:
  application:
    extraEnvs:
      secrets:
        - name: <my_product_secrets>

Example 1.7. Inject the environment variables in the Red Hat Developer Hub and my-rhdh-database-secrets secrets
<my_product_secrets> is your preferred Developer Hub secret name, specifying the identifier for your secret configuration within Developer Hub.
spec.application.extraFiles.secrets
- Enter your certificates files secret name and files list.

Mount the postgres-crt.pem, postgres-ca.pem, and postgres-key.key files contained in the my-rhdh-database-certificates-secrets secret.

spec.database.enableLocalDb
- Enable or disable the local PostgreSQL database.

Disable the local PostgreSQL database generation to use an external PostgreSQL database:

spec:
  database:
    enableLocalDb: false

On a development environment, use the local PostgreSQL database:

spec:
  database:
    enableLocalDb: true

spec.deployment
- Optionally, enter your deployment configuration.
Apply your Backstage CR to start or update your Developer Hub instance.

$ oc apply --filename=my-rhdh-custom-resource.yaml --namespace=my-rhdh-project
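Putting the optional fields above together, a Backstage CR that mounts the my-rhdh-app-config config map and injects your secret might look like the following sketch; the metadata.name value my-rhdh is an assumption:

```yaml
apiVersion: rhdh.redhat.com/v1alpha3
kind: Backstage
metadata:
  name: my-rhdh                  # assumed instance name
spec:
  application:
    appConfig:
      configMaps:
        - name: my-rhdh-app-config
    extraEnvs:
      secrets:
        - name: <my_product_secrets>
  database:
    enableLocalDb: true          # default local PostgreSQL database
```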
1.2.1. Mounting additional files in your custom configuration using the Red Hat Developer Hub Operator
You can use the Developer Hub Operator to mount extra files, such as a ConfigMap or Secret, to the container in a preferred location.
The mountPath field specifies the location where a ConfigMap or Secret is mounted. The behavior of the mount, whether it includes or excludes a subPath, depends on the specification of the key or mountPath fields.
- If key and mountPath are not specified: Each key or value is mounted as a filename or content with a subPath.
- If key is specified with or without mountPath: The specified key or value is mounted with a subPath.
- If only mountPath is specified: A directory containing all the keys or values is mounted without a subPath.
- OpenShift Container Platform does not automatically update a volume mounted with a subPath. By default, the RHDH Operator monitors these ConfigMaps or Secrets and refreshes the RHDH Pod when changes occur.
- For security purposes, Red Hat Developer Hub does not give the Operator Service Account read access to Secrets. As a result, mounting files from Secrets without specifying both mountPath and key is not supported.
Prerequisites
- You have developer permissions to access the OpenShift Container Platform cluster containing your Developer Hub instance using the OpenShift CLI (oc).
- Your OpenShift Container Platform administrator has installed the Red Hat Developer Hub Operator in OpenShift Container Platform.
Procedure
In OpenShift Container Platform, create your ConfigMap or Secret with YAML code similar to the following examples:

Example 1.8. Minimal my-project-configmap ConfigMap example

Example 1.9. Minimal Red Hat Developer Hub Secret example

For more information, see Provisioning and using your custom Red Hat Developer Hub configuration.
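As a sketch of Examples 1.8 and 1.9, a minimal ConfigMap and Secret might look like the following; the file names and contents are assumptions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-project-configmap
data:
  # Assumed file name and content -- replace with your own.
  file1.txt: |
    My file content
---
apiVersion: v1
kind: Secret
metadata:
  name: <my_product_secrets>
stringData:
  # Assumed key -- replace with your own secret file.
  secret_file.txt: |
    My secret content
```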
Set the value of the configMaps name to the name of the ConfigMap, or the secrets name to the name of the Secret, in your Backstage CR. For example:
<my_product_secrets> is your preferred Developer Hub secret name, specifying the identifier for your secret configuration within Developer Hub.
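A Backstage CR fragment referencing both objects might look like this sketch; the mountPath and key values are assumptions:

```yaml
spec:
  application:
    extraFiles:
      mountPath: /my/path              # assumed mount location
      configMaps:
        - name: my-project-configmap
      secrets:
        - name: <my_product_secrets>
          key: secret_file.txt         # both mountPath and key are required for Secrets
          mountPath: /my/secret/path   # assumed mount location
```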
1.3. Using the Red Hat Developer Hub Helm chart to run Developer Hub with your custom configuration
You can use the Red Hat Developer Hub Helm chart to add a custom application configuration file to your OpenShift Container Platform instance.
Prerequisites
- By using the OpenShift Container Platform web console, you have access, with developer permissions, to an OpenShift Container Platform project named <my-rhdh-project>, aimed at containing your Developer Hub instance.
- You have uploaded your custom configuration files and secrets in your <my-rhdh-project> project.
Procedure
Configure Helm to use your custom configuration files in Developer Hub.
- Go to the Helm tab to see the list of Helm releases.
- Click the overflow menu on the Helm release that you want to use and select Upgrade.
- Use the YAML view to edit the Helm configuration.
Set the value of the upstream.backstage.extraAppConfig.configMapRef and upstream.backstage.extraAppConfig.filename parameters as follows:

Helm configuration excerpt

upstream:
  backstage:
    extraAppConfig:
      - configMapRef: my-rhdh-app-config
        filename: app-config.yaml

- Click Upgrade.
Next steps
- Install Developer Hub by using Helm.
Chapter 2. Configuring external PostgreSQL databases
As an administrator, you can configure and use external PostgreSQL databases in Red Hat Developer Hub. You can use a PostgreSQL certificate file to configure an external PostgreSQL instance using the Operator or Helm Chart.
Developer Hub supports the configuration of external PostgreSQL databases. You can perform maintenance activities, such as backing up your data or configuring high availability (HA) for the external PostgreSQL databases.
By default, the Red Hat Developer Hub Operator or Helm chart creates a local PostgreSQL database. However, this configuration is not suitable for production environments. For production deployments, disable the creation of the local database and configure Developer Hub to connect to an external PostgreSQL instance instead.
2.1. Configuring an external PostgreSQL instance using the Operator
You can configure an external PostgreSQL instance using the Red Hat Developer Hub Operator. By default, the Operator creates and manages a local instance of PostgreSQL in the same namespace where you have deployed the RHDH instance. However, you can change this default setting to configure an external PostgreSQL database server, for example, Amazon Web Services (AWS) Relational Database Service (RDS) or Azure database.
Prerequisites
- You are using a supported version of PostgreSQL. For more information, see the Product life cycle page.
You have the following details:

- db-host: Denotes your PostgreSQL instance Domain Name System (DNS) name or IP address
- db-port: Denotes your PostgreSQL instance port number, such as 5432
- username: Denotes the user name to connect to your PostgreSQL instance
- password: Denotes the password to connect to your PostgreSQL instance
- You have installed the Red Hat Developer Hub Operator.
- Optional: You have a CA certificate, Transport Layer Security (TLS) private key, and TLS certificate so that you can secure your database connection by using the TLS protocol. For more information, refer to your PostgreSQL vendor documentation.
By default, Developer Hub uses a database for each plugin and automatically creates it if none is found. You might need the Create Database privilege in addition to PSQL Database privileges for configuring an external PostgreSQL instance.
Procedure
Optional: Create a certificate secret to configure your PostgreSQL instance with a TLS connection.

Create a credential secret to connect with the PostgreSQL instance:

1. Provide the name of the credential secret.
2. Provide credential data to connect with your PostgreSQL instance.
3. Optional: Provide the value based on the required Secure Sockets Layer (SSL) mode.
4. Optional: Provide the value only if you need a TLS connection for your PostgreSQL instance.
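Following these callouts, a credential secret might look like this sketch; the secret name my-rhdh-database-secrets and the PGSSLMODE and NODE_EXTRA_CA_CERTS values are assumptions to adjust for your environment:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-rhdh-database-secrets   # (1) assumed credential secret name
type: Opaque
stringData:                        # (2) credential data for your PostgreSQL instance
  POSTGRES_USER: <username>
  POSTGRES_PASSWORD: <password>
  POSTGRES_HOST: <db-host>
  POSTGRES_PORT: "<db-port>"
  PGSSLMODE: require               # (3) assumed SSL mode
  NODE_EXTRA_CA_CERTS: /opt/app-root/src/postgres-crt.pem  # (4) TLS connections only
```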
Create your Backstage custom resource (CR).

Note: The environment variables listed in the Backstage CR work with the Operator default configuration. If you have changed the Operator default configuration, you must reconfigure the Backstage CR accordingly.

Apply the Backstage CR to the namespace where you have deployed the Developer Hub instance.
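A Backstage CR for an external database might look like the following sketch; the metadata.name, certificate secret name, and key are assumptions:

```yaml
apiVersion: rhdh.redhat.com/v1alpha3
kind: Backstage
metadata:
  name: my-rhdh                    # assumed instance name
spec:
  database:
    enableLocalDb: false           # use the external instance instead of a local one
  application:
    extraFiles:
      secrets:
        - name: my-rhdh-database-certificates-secrets  # TLS certificates, if any
          key: postgres-crt.pem    # assumed certificate file name
    extraEnvs:
      secrets:
        - name: my-rhdh-database-secrets  # credential secret from the previous step
```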
2.2. Configuring an external PostgreSQL instance using the Helm Chart
You can configure an external PostgreSQL instance by using the Helm Chart. By default, the Helm Chart creates and manages a local instance of PostgreSQL in the same namespace where you have deployed the RHDH instance. However, you can change this default setting to configure an external PostgreSQL database server, for example, Amazon Web Services (AWS) Relational Database Service (RDS) or Azure database.
Prerequisites
- You are using a supported version of PostgreSQL. For more information, see the Product life cycle page.
You have the following details:

- db-host: Denotes your PostgreSQL instance Domain Name System (DNS) name or IP address
- db-port: Denotes your PostgreSQL instance port number, such as 5432
- username: Denotes the user name to connect to your PostgreSQL instance
- password: Denotes the password to connect to your PostgreSQL instance
- You have installed the RHDH application by using the Helm Chart.
- Optional: You have a CA certificate, Transport Layer Security (TLS) private key, and TLS certificate so that you can secure your database connection by using the TLS protocol. For more information, refer to your PostgreSQL vendor documentation.
By default, Developer Hub uses a database for each plugin and automatically creates it if none is found. You might need the Create Database privilege in addition to PSQL Database privileges for configuring an external PostgreSQL instance.
Procedure
Optional: Create a certificate secret to configure your PostgreSQL instance with a TLS connection.

Create a credential secret to connect with the PostgreSQL instance:

1. Provide the name of the credential secret.
2. Provide credential data to connect with your PostgreSQL instance.
3. Optional: Provide the value based on the required Secure Sockets Layer (SSL) mode.
4. Optional: Provide the value only if you need a TLS connection for your PostgreSQL instance.
Configure your PostgreSQL instance in the Helm configuration file named values.yaml:

1. Set the value of the upstream.postgresql.enabled parameter to false to disable creating local PostgreSQL instances.
2. Provide the name of the credential secret.
3. Provide the name of the credential secret.
4. Optional: Provide the name of the TLS certificate only for a TLS connection.
5. Optional: Provide the name of the CA certificate only for a TLS connection.
6. Optional: Provide the name of the TLS private key only if your TLS connection requires a private key.
7. Provide the name of the certificate secret if you have configured a TLS connection.
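A values.yaml sketch loosely matching these callouts; the secret names my-rhdh-database-secrets and my-rhdh-database-certificates-secrets, the certificate path, and the exact chart keys are assumptions to verify against your chart version:

```yaml
upstream:
  postgresql:
    enabled: false                              # disable the local instance
    auth:
      existingSecret: my-rhdh-database-secrets  # credential secret (assumed name)
  backstage:
    extraEnvVarsSecrets:
      - my-rhdh-database-secrets                # inject the credentials
    extraEnvVars:
      - name: NODE_EXTRA_CA_CERTS
        value: /opt/app-root/src/postgres-crt.pem  # TLS certificate (assumed path)
    extraVolumeMounts:
      - name: postgres-certs
        mountPath: /opt/app-root/src/postgres-crt.pem
        subPath: postgres-crt.pem
    extraVolumes:
      - name: postgres-certs
        secret:
          secretName: my-rhdh-database-certificates-secrets  # certificate secret
```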
Apply the configuration changes in your Helm configuration file named values.yaml:

$ helm upgrade -n <your-namespace> <your-deploy-name> openshift-helm-charts/redhat-developer-hub -f values.yaml --version 1.5.3
2.3. Migrating local databases to an external database server using the Operator
By default, Red Hat Developer Hub hosts the data for each plugin in a PostgreSQL database. When you fetch the list of databases, you might see multiple databases based on the number of plugins configured in Developer Hub. You can migrate the data from an RHDH instance hosted on a local PostgreSQL server to an external PostgreSQL service, such as AWS RDS, Azure database, or Crunchy database. To migrate the data from each RHDH instance, you can use PostgreSQL utilities, such as pg_dump with psql or pgAdmin.
The following procedure uses a database copy script to do a quick migration.
Prerequisites
Procedure
Configure port forwarding for the local PostgreSQL database pod by running the following command on a terminal:
$ oc port-forward -n <your-namespace> <pgsql-pod-name> <forward-to-port>:<forward-from-port>

Where:
- The <pgsql-pod-name> variable denotes the name of a PostgreSQL pod with the format backstage-psql-<deployment-name>-<_index>.
- The <forward-to-port> variable denotes the port of your choice to forward PostgreSQL data to.
- The <forward-from-port> variable denotes the local PostgreSQL instance port, such as 5432.

Example: Configuring port forwarding

$ oc port-forward -n developer-hub backstage-psql-developer-hub-0 15432:5432
Make a copy of the following db_copy.sh script and edit the details based on your configuration:

1. The destination host name, for example, <db-instance-name>.rds.amazonaws.com.
2. The destination port, such as 5432.
3. The destination server username, for example, postgres.
4. The source host name, such as 127.0.0.1.
5. The source port number, such as the <forward-to-port> variable.
6. The source server username, for example, postgres.
7. The names of the databases to import, in double quotes, separated by spaces, for example, ("backstage_plugin_app" "backstage_plugin_auth" "backstage_plugin_catalog" "backstage_plugin_permission" "backstage_plugin_scaffolder" "backstage_plugin_search").
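A sketch of such a db_copy.sh script, following the callouts above; the host names, ports, and usernames are placeholder assumptions, and the script Red Hat ships may differ in detail:

```shell
#!/bin/bash
# db_copy.sh -- copy each Developer Hub plugin database from the local
# (port-forwarded) PostgreSQL instance to an external server.
# All connection values below are placeholder assumptions; edit them first.

to_host=destination.example.com   # (1) destination host name
to_port=5432                      # (2) destination port
to_user=postgres                  # (3) destination server username
from_host=127.0.0.1               # (4) source host name
from_port=15432                   # (5) source port (your <forward-to-port>)
from_user=postgres                # (6) source server username
allDB=("backstage_plugin_app" "backstage_plugin_auth" "backstage_plugin_catalog" "backstage_plugin_permission" "backstage_plugin_scaffolder" "backstage_plugin_search")  # (7)

migrate_all() {
  for db in "${allDB[@]}"; do
    # Create the database on the destination, then stream a dump into it.
    PGPASSWORD="$TO_PSW" psql -h "$to_host" -p "$to_port" -U "$to_user" -c "CREATE DATABASE \"$db\";"
    pg_dump -h "$from_host" -p "$from_port" -U "$from_user" "$db" \
      | PGPASSWORD="$TO_PSW" psql -h "$to_host" -p "$to_port" -U "$to_user" -d "$db"
  done
}

# Runs only when a destination password is supplied, for example:
#   TO_PSW=<destination-db-password> ./db_copy.sh
if [ -n "${TO_PSW:-}" ]; then
  migrate_all
fi
```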
Create a destination database for copying the data:

$ TO_PSW=<destination-db-password> /bin/bash /path/to/db_copy.sh

The <destination-db-password> variable denotes the password to connect to the destination database.
Note: You can stop port forwarding when the copying of the data is complete. For more information about handling large databases and using the compression tools, see the Handling Large Databases section on the PostgreSQL website.
Reconfigure your Backstage custom resource (CR). For more information, see Configuring an external PostgreSQL instance using the Operator.

Check that the following code is present at the end of your Backstage CR after reconfiguration:

Note: Reconfiguring the Backstage CR deletes the corresponding StatefulSet and Pod objects, but does not delete the PersistentVolumeClaim object. Use the following command to delete the local PersistentVolumeClaim object:

$ oc -n developer-hub delete pvc <local-psql-pvc-name>

where the <local-psql-pvc-name> variable is in the data-<psql-pod-name> format.

- Apply the configuration changes.
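Based on Section 2.1, the fragment to check for is presumably the setting that disables the local database, for example:

```yaml
spec:
  database:
    enableLocalDb: false
```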
Verification
Verify that your RHDH instance is running with the migrated data and does not contain the local PostgreSQL database by running the following command:
$ oc get pods -n <your-namespace>

Check the output for the following details:
- The backstage-developer-hub-xxx pod is in running state.
- The backstage-psql-developer-hub-0 pod is not available.

You can also verify these details using the Topology view in the OpenShift Container Platform web console.
Chapter 3. Configuring Red Hat Developer Hub deployment when using the Operator
The Red Hat Developer Hub Operator exposes a rhdh.redhat.com/v1alpha3 API Version of its custom resource (CR). This CR exposes a generic spec.deployment.patch field, which gives you full control over the Developer Hub Deployment resource. This field can be a fragment of the standard apps.Deployment Kubernetes object.
Procedure
Create a Backstage CR with the following fields:
Example

labels
- Add labels to the Developer Hub pod.

Example adding the label my=true

volumes
- Add an additional volume named my-volume and mount it under /my/path in the Developer Hub application container.

Example additional volume

- Replace the default dynamic-plugins-root volume with a persistent volume claim (PVC) named dynamic-plugins-root. Note the $patch: replace directive; otherwise, a new volume is added.

Example dynamic-plugins-root volume replacement

cpu request
- Set the CPU request for the Developer Hub application container to 250m.

Example CPU request

my-sidecar container
- Add a new my-sidecar sidecar container into the Developer Hub Pod.

Example sidecar container
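The examples above can be combined into one spec.deployment.patch sketch; the container name backstage-backend, the sidecar image, and the emptyDir volume type are assumptions:

```yaml
spec:
  deployment:
    patch:
      spec:
        template:
          metadata:
            labels:
              my: "true"                   # add the my=true label to the pod
          spec:
            containers:
              - name: backstage-backend    # assumed main container name
                resources:
                  requests:
                    cpu: 250m              # CPU request
                volumeMounts:
                  - name: my-volume
                    mountPath: /my/path    # mount the additional volume
              - name: my-sidecar           # new sidecar container
                image: quay.io/example/my-sidecar:latest  # assumed image
            volumes:
              - name: my-volume            # additional volume
                emptyDir: {}
              - $patch: replace            # replace the default volume, do not append
                name: dynamic-plugins-root
                persistentVolumeClaim:
                  claimName: dynamic-plugins-root
```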
Chapter 4. Configuring readOnlyRootFilesystem in Red Hat Developer Hub
The Red Hat Developer Hub deployment consists of two containers: an initContainer that installs the Dynamic Plugins, and a backend container that runs the application. The initContainer has the readOnlyRootFilesystem option enabled by default. To enable this option on the backend container, you must either have permission to deploy resources through Helm or to create or update a CR for Operator-backed deployments. You can manually configure the readOnlyRootFilesystem option on the backend container by using the following methods:
- The Red Hat Developer Hub Operator
- The Red Hat Developer Hub Helm chart
4.1. Configuring the readOnlyRootFilesystem option in a Red Hat Developer Hub Operator deployment
When you are deploying Developer Hub using the Operator, you must specify a patch for the deployment in your Backstage custom resource (CR) that applies the readOnlyRootFilesystem option to the securityContext section in the Developer Hub backend container.
Procedure
In your Backstage CR, add the securityContext specification. For example:

1. Name of the main container defined in the Operator default configuration.
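A sketch of such a patch; the container name backstage-backend is assumed to be the Operator default main container name:

```yaml
spec:
  deployment:
    patch:
      spec:
        template:
          spec:
            containers:
              - name: backstage-backend    # (1) assumed main container name
                securityContext:
                  readOnlyRootFilesystem: true
```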
4.2. Configuring the readOnlyRootFilesystem option in a Red Hat Developer Hub Helm chart deployment
Procedure
In your values.yaml file, add the readOnlyRootFilesystem: true line to the containerSecurityContext section. For example:

upstream:
  backstage:
    containerSecurityContext:
      readOnlyRootFilesystem: true
Chapter 5. Configuring high availability in Red Hat Developer Hub
Previously, Red Hat Developer Hub supported only a single-instance application. With this configuration, if the instance failed due to software crashes, hardware issues, or other unexpected disruptions, the entire Red Hat Developer Hub service became unavailable, preventing development workflows and access to resources. With high availability, you receive a failover mechanism that ensures the service remains available even if one or more components fail. By increasing the number of replicas, you introduce redundancy that helps maintain productivity and minimize disruption.
As an administrator, you can configure high availability in Red Hat Developer Hub. Once you set the high availability option in Developer Hub, the Red Hat OpenShift Container Platform built-in Load Balancer manages the ingress traffic and distributes the load to each pod. The RHDH backend also manages concurrent requests or conflicts on the same resource.
You can configure high availability in Developer Hub by scaling your replicas to a number greater than 1 in your configuration file. The configuration file that you use depends on the method that you used to install your Developer Hub instance. If you used the Operator to install your Developer Hub instance, configure the replica values in your Backstage custom resource. If you used the Helm chart to install your Developer Hub instance, configure the replica values in your Helm chart.
5.1. Configuring high availability in a Red Hat Developer Hub Operator deployment
RHDH instances that are deployed with the Operator use configurations in the Backstage custom resource. In the Backstage custom resource, the default value for the replicas field is 1. If you want to configure your RHDH instance for high availability, you must set replicas to a value greater than 1.
Procedure
In your Backstage custom resource, set replicas to a value greater than 1. For example:

1. Set the number of replicas based on the number of backup instances that you want to configure.
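A sketch of the replicas setting in the Backstage CR; the spec.application.replicas path is an assumption to verify against your Operator API version:

```yaml
spec:
  application:
    replicas: <replicas_value>   # (1) number of instances, greater than 1
```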
5.2. Configuring high availability in a Red Hat Developer Hub Helm chart deployment
When you are deploying Developer Hub using the Helm chart, you must set replicas to a value greater than 1 in your Helm chart. The default value for replicas is 1.
Procedure
To configure your Developer Hub Helm chart for high availability, complete the following step:
In your Helm chart configuration file, set replicas to a value greater than 1. For example:

upstream:
  backstage:
    replicas: <replicas_value>

1. Set the number of replicas based on the number of backup instances that you want to configure.
Chapter 6. Running Red Hat Developer Hub behind a corporate proxy
In a network restricted environment, configure Red Hat Developer Hub to use your proxy to access remote network resources.
You can run the Developer Hub application behind a corporate proxy by setting any of the following environment variables before starting the application:
- HTTP_PROXY: Denotes the proxy to use for HTTP requests.
- HTTPS_PROXY: Denotes the proxy to use for HTTPS requests.
- NO_PROXY: Denotes a comma-separated list of hostnames or IP addresses that can be accessed without the proxy, even if one is specified. Set this environment variable to bypass the proxy for certain domains.
6.1. Understanding the NO_PROXY exclusion rules
NO_PROXY is a comma- or space-separated list of hostnames or IP addresses, with optional port numbers. If the input URL matches any of the entries listed in NO_PROXY, a direct request fetches that URL, bypassing the proxy settings.
The default value for NO_PROXY in RHDH is localhost,127.0.0.1. If you want to override it, include at least localhost or localhost:7007 in the list. Otherwise, the RHDH backend might fail.
Matching follows the rules below:
- NO_PROXY=* will bypass the proxy for all requests.
- Spaces and commas might separate the entries in the NO_PROXY list. For example, NO_PROXY="localhost,example.com", NO_PROXY="localhost example.com", and NO_PROXY="localhost, example.com" have the same effect.
- If NO_PROXY contains no entries, configuring the HTTP(S)_PROXY settings makes the backend send all requests through the proxy.
- The backend does not perform a DNS lookup to determine whether a request should bypass the proxy. For example, if DNS resolves example.com to 1.2.3.4, setting NO_PROXY=1.2.3.4 has no effect on requests sent to example.com. Only requests sent to the IP address 1.2.3.4 bypass the proxy.
- If you add a port after the hostname or IP address, the request must match both the host/IP and the port to bypass the proxy. For example, NO_PROXY=example.com:1234 would bypass the proxy for requests to http(s)://example.com:1234, but not for requests on other ports, like http(s)://example.com.
- If you do not specify a port after the hostname or IP address, all requests to that host/IP address bypass the proxy regardless of the port. For example, NO_PROXY=localhost would bypass the proxy for requests sent to URLs like http(s)://localhost:7077 and http(s)://localhost:8888.
- IP address blocks in CIDR notation do not work. Setting NO_PROXY=10.11.0.0/16 has no effect, even if the backend sends a request to an IP address in that block.
- NO_PROXY supports only IPv4 addresses. IPv6 addresses like ::1 do not work.
- Generally, the proxy is only bypassed if the hostname is an exact match for an entry in the NO_PROXY list. The only exceptions are entries that start with a dot (.) or with a wildcard (*); in that case, the proxy is bypassed if the hostname ends with the entry.
List the domain and the wildcard domain if you want to exclude a given domain and all its subdomains. For example, you would set NO_PROXY=example.com,.example.com to bypass the proxy for requests sent to http(s)://example.com and http(s)://subdomain.example.com.
6.2. Configuring proxy information in Operator deployment
For Operator-based deployment, the approach you use for proxy configuration is based on your role:
- As a cluster administrator with access to the Operator namespace, you can configure the proxy variables in the Operator’s default ConfigMap file. This configuration applies the proxy settings to all the users of the Operator.
- As a developer, you can configure the proxy variables in a custom resource (CR) file. This configuration applies the proxy settings to the RHDH application created from that CR.
Prerequisites
- You have installed the Red Hat Developer Hub application.
Procedure
Perform one of the following steps based on your role:
As an administrator, set the proxy information in the Operator's default ConfigMap file:

- Search for a ConfigMap file named backstage-default-config in the default namespace rhdh-operator and open it.
- Find the deployment.yaml key.
- Set the value of the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables in the Deployment spec as shown in the following example:

Example: Setting proxy variables in a ConfigMap file
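A sketch of what that ConfigMap entry might look like. The ConfigMap name, namespace, and deployment.yaml key come from the steps above; the container name and the proxy endpoints are hypothetical values to replace with your own:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: backstage-default-config
  namespace: rhdh-operator
data:
  deployment.yaml: |-
    apiVersion: apps/v1
    kind: Deployment
    spec:
      template:
        spec:
          containers:
            - name: backstage-backend  # hypothetical container name
              env:
                - name: HTTP_PROXY
                  value: 'http://10.10.10.105:3128'
                - name: HTTPS_PROXY
                  value: 'http://10.10.10.106:3128'
                - name: NO_PROXY
                  value: 'localhost,example.org'
```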
As a developer, set the proxy information in your Backstage CR file as shown in the following example:

Example: Setting proxy variables in a CR file
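A sketch of a Backstage CR carrying the proxy variables. The spec.application.extraEnvs.envs path and the apiVersion are assumptions to verify against your installed Operator version; the proxy endpoints are hypothetical:

```yaml
apiVersion: rhdh.redhat.com/v1alpha1  # verify against your Operator version
kind: Backstage
metadata:
  name: my-rhdh
spec:
  application:
    extraEnvs:
      envs:
        - name: HTTP_PROXY
          value: 'http://10.10.10.105:3128'
        - name: HTTPS_PROXY
          value: 'http://10.10.10.106:3128'
        - name: NO_PROXY
          value: 'localhost,example.org'
```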
- Save the configuration changes.
6.3. Configuring proxy information in Helm deployment Copy linkLink copied to clipboard!
For Helm-based deployment, either a developer or a cluster administrator with permissions to create resources in the cluster can configure the proxy variables in a values.yaml Helm configuration file.
Prerequisites
- You have installed the Red Hat Developer Hub application.
Procedure
Set the proxy information in your Helm configuration file:
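A values.yaml sketch using the placeholder variables described below. The upstream.backstage.extraEnvVars path follows the Developer Hub Helm chart's conventions, but verify it against your chart version:

```yaml
# values.yaml (fragment); assumed chart layout
upstream:
  backstage:
    extraEnvVars:
      - name: HTTP_PROXY
        value: '<http_proxy_url>'
      - name: HTTPS_PROXY
        value: '<https_proxy_url>'
      - name: NO_PROXY
        value: '<no_proxy_settings>'
```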
where:

- <http_proxy_url>: Denotes a variable that you must replace with the HTTP proxy URL.
- <https_proxy_url>: Denotes a variable that you must replace with the HTTPS proxy URL.
- <no_proxy_settings>: Denotes a variable that you must replace with comma-separated URLs that you want to exclude from proxying, for example, foo.com,baz.com.

Example: Setting proxy variables using Helm Chart
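A hedged sketch for the example heading above, with hypothetical proxy endpoints and the upstream.backstage.extraEnvVars layout used by the Developer Hub Helm chart (verify against your chart version):

```yaml
upstream:
  backstage:
    extraEnvVars:
      - name: HTTP_PROXY
        value: 'http://10.10.10.105:3128'
      - name: HTTPS_PROXY
        value: 'http://10.10.10.106:3128'
      - name: NO_PROXY
        value: 'localhost,example.org'
```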
- Save the configuration changes.
Chapter 7. Configuring an RHDH instance with a TLS connection in Kubernetes Copy linkLink copied to clipboard!
You can configure an RHDH instance with a Transport Layer Security (TLS) connection in a Kubernetes cluster, such as an Azure Red Hat OpenShift (ARO) cluster, any cluster from a supported cloud provider, or your own properly configured cluster. TLS ensures a secure connection between the RHDH instance and other entities, such as third-party applications or external databases. However, you must use a public Certificate Authority (CA)-signed certificate to configure your Kubernetes cluster.
Prerequisites
- You have set up an Azure Red Hat OpenShift (ARO) cluster with a public CA-signed certificate. For more information about obtaining CA certificates, refer to your vendor documentation.
You have created a namespace and set up a service account with proper read permissions on resources.
Example: Kubernetes manifest for role-based access control
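A minimal sketch of such a manifest. The namespace, service account name, and exact resource list are assumptions to adapt to your cluster:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: backstage-read-only   # hypothetical name
  namespace: rhdh             # hypothetical namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backstage-read-only
rules:
  # Read-only access to the resource kinds the Kubernetes plugin inspects
  - apiGroups: ['']
    resources: ['pods', 'services', 'configmaps', 'limitranges']
    verbs: ['get', 'list', 'watch']
  - apiGroups: ['apps']
    resources: ['deployments', 'replicasets', 'daemonsets', 'statefulsets']
    verbs: ['get', 'list', 'watch']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: backstage-read-only
subjects:
  - kind: ServiceAccount
    name: backstage-read-only
    namespace: rhdh
roleRef:
  kind: ClusterRole
  name: backstage-read-only
  apiGroup: rbac.authorization.k8s.io
```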
- You have obtained the secret and the service CA certificate associated with your service account.
You have created some resources and added annotations to them so that the Kubernetes plugin can discover them. You can apply these Kubernetes annotations:

- backstage.io/kubernetes-id to label components
- backstage.io/kubernetes-namespace to label namespaces
Procedure
Enable the Kubernetes plugins in the dynamic-plugins-rhdh.yaml file.

Note: The backstage-plugin-kubernetes plugin is currently in Technology Preview. As an alternative, you can use the ./dynamic-plugins/dist/backstage-plugin-topology-dynamic plugin, which is Generally Available (GA).

Set the Kubernetes cluster details and configure the catalog sync options in the app-config.yaml configuration file:

1. The base URL to the Kubernetes control plane. You can run the kubectl cluster-info command to get the base URL.
2. Set the value of this parameter to false to enable the verification of the TLS certificate.
3. Optional: The link to the Kubernetes dashboard managing the ARO cluster.
4. Optional: Pass the service account token using a K8S_SERVICE_ACCOUNT_TOKEN environment variable that you define in your <my_product_secrets> secret.
5. Pass the CA data using a K8S_CONFIG_CA_DATA environment variable that you define in your <my_product_secrets> secret.
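A sketch of the app-config.yaml fragment that the callouts above describe. The cluster name, dashboard URL, and the K8S_CLUSTER_URL variable name are hypothetical; K8S_SERVICE_ACCOUNT_TOKEN and K8S_CONFIG_CA_DATA are the variables defined in your <my_product_secrets> secret:

```yaml
kubernetes:
  clusterLocatorMethods:
    - type: config
      clusters:
        - name: my-aro-cluster                      # hypothetical cluster name
          url: ${K8S_CLUSTER_URL}                   # (1) base URL of the control plane
          authProvider: serviceAccount
          skipTLSVerify: false                      # (2) false enables TLS verification
          dashboardUrl: https://dashboard.example.com  # (3) optional, hypothetical
          serviceAccountToken: ${K8S_SERVICE_ACCOUNT_TOKEN}  # (4) optional
          caData: ${K8S_CONFIG_CA_DATA}             # (5)
  serviceLocatorMethod:
    type: multiTenant
```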
- Save the configuration changes.
Verification
Run the RHDH application to import your catalog:
    kubectl -n rhdh-operator get pods -w

- Verify that the pod log shows no errors for your configuration.
- Go to Catalog and check the component page in the Developer Hub instance to verify the cluster connection and the presence of your created resources.
If you encounter connection errors, such as certificate issues or permission problems, check the message box on the component page or view the pod logs.
Chapter 8. Using the dynamic plugins cache Copy linkLink copied to clipboard!
The dynamic plugins cache in Red Hat Developer Hub (RHDH) enhances the installation process and reduces platform boot time by storing previously installed plugins. If the configuration remains unchanged, this feature prevents the need to re-download plugins on subsequent boots.
When you enable dynamic plugins cache:
- The system calculates a checksum of each plugin's YAML configuration (excluding pluginConfig).
- The checksum is stored in a file named dynamic-plugin-config.hash within the plugin's directory.
- During boot, if a plugin's package reference matches the previous installation and the checksum is unchanged, the download is skipped.
- Plugins that are disabled since the previous boot are automatically removed.
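The caching decision described above can be sketched as follows. This is an illustrative model only, not RHDH's actual implementation; the real hash algorithm, serialization, and file layout are internal details:

```python
import hashlib
import json

def plugin_checksum(plugin_entry: dict) -> str:
    """Checksum of a plugin's configuration entry, ignoring its pluginConfig key."""
    relevant = {k: v for k, v in plugin_entry.items() if k != "pluginConfig"}
    # Canonical serialization so the same settings always hash identically.
    canonical = json.dumps(relevant, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

old = {"package": "oci://quay.io/example/plugin:v1", "disabled": False,
       "pluginConfig": {"logLevel": "info"}}
new = {"package": "oci://quay.io/example/plugin:v1", "disabled": False,
       "pluginConfig": {"logLevel": "debug"}}

# Changing only pluginConfig leaves the checksum, and thus the cache, intact.
assert plugin_checksum(old) == plugin_checksum(new)

# Changing the package reference invalidates the cached download.
assert plugin_checksum(old) != plugin_checksum(
    {"package": "oci://quay.io/example/plugin:v2", "disabled": False})
```

The key design point this illustrates is that pluginConfig is excluded from the checksum, so tuning a plugin's configuration does not force a re-download of its artifact.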
8.1. Enabling the dynamic plugins cache Copy linkLink copied to clipboard!
To enable the dynamic plugins cache in RHDH, the plugins directory dynamic-plugins-root must be a persistent volume.
8.1.1. Creating a PVC for the dynamic plugin cache by using the Operator Copy linkLink copied to clipboard!
For operator-based installations, you must manually create the persistent volume claim (PVC) by replacing the default dynamic-plugins-root volume with a PVC named dynamic-plugins-root.
Procedure
Create the persistent volume claim (PVC) definition and save it to a file, such as pvc.yaml.

Note: This example uses ReadWriteOnce as the access mode, which prevents multiple replicas from sharing the PVC across different nodes. To run multiple replicas on different nodes, depending on your storage driver, you must use an access mode such as ReadWriteMany.

To apply this PVC to your cluster, run the following command:
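For reference, a pvc.yaml sketch for the claim to apply; the storage size is an arbitrary example, and the access mode follows the note above:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-plugins-root
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi   # arbitrary example size
```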
    oc apply -f pvc.yaml

Replace the default dynamic-plugins-root volume with a PVC named dynamic-plugins-root.

Note: To avoid adding a new volume, you must use the $patch: replace directive.
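A sketch of a Backstage CR fragment that replaces the default volume. The spec.deployment.patch path is an assumption to verify against your Operator version; the $patch: replace directive comes from the note above:

```yaml
spec:
  deployment:
    patch:
      spec:
        template:
          spec:
            volumes:
              - $patch: replace          # replace, rather than add, the volume
                name: dynamic-plugins-root
                persistentVolumeClaim:
                  claimName: dynamic-plugins-root
```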
8.1.2. Creating a PVC for the dynamic plugin cache using the Helm Chart Copy linkLink copied to clipboard!
For Helm chart installations, if you require the dynamic plugin cache to persist across pod restarts, you must create a persistent volume claim (PVC) and configure the Helm chart to use it.
Procedure
Create the persistent volume claim (PVC) definition and save it to a file, such as pvc.yaml.

Note: This example uses ReadWriteOnce as the access mode, which prevents multiple replicas from sharing the PVC across different nodes. To run multiple replicas on different nodes, depending on your storage driver, you must use an access mode such as ReadWriteMany.

To apply this PVC to your cluster, run the following command:

    oc apply -f pvc.yaml

Configure the Helm chart to use the PVC.

Note: When you configure the Helm chart to use the PVC, you must also include the extraVolumes defined in the default Helm chart.
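A values.yaml sketch for the step above. The upstream.backstage.extraVolumes layout and the dynamic-plugins ConfigMap volume mirror the default Helm chart, but verify both against your chart version:

```yaml
upstream:
  backstage:
    extraVolumes:
      # Back the plugins directory with the PVC created earlier
      - name: dynamic-plugins-root
        persistentVolumeClaim:
          claimName: dynamic-plugins-root
      # Keep the extraVolumes defined in the default Helm chart
      - name: dynamic-plugins
        configMap:
          defaultMode: 420
          name: dynamic-plugins
          optional: true
```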
8.2. Configuring the dynamic plugins cache Copy linkLink copied to clipboard!
You can set the following optional dynamic plugin cache parameters in your dynamic-plugins.yaml file:

- forceDownload: Set the value to true to force a reinstall of the plugin, bypassing the cache. The default value is false.
- pullPolicy: Similar to the forceDownload parameter and consistent with other image container platforms. You can use one of the following values for this key:
  - Always: This value compares the image digest in the remote registry and downloads the artifact if it has changed, even if the plugin was previously downloaded.
  - IfNotPresent: This value downloads the artifact if it is not already present in the dynamic-plugins-root folder, without checking image digests.

Note: The pullPolicy setting is also applied to the NPM downloading method, although Always downloads the remote artifact without a digest check. The existing forceDownload option remains functional; however, the pullPolicy option takes precedence. The forceDownload option may be deprecated in a future Developer Hub release.
Example dynamic-plugins.yaml file configuration to download the remote artifact without a digest check:
    plugins:
      - disabled: false
        pullPolicy: Always
        package: 'oci://quay.io/example-org/example-plugin:v1.0.0!internal-backstage-plugin-example'
Chapter 9. Configuring default mounts for Secrets and PVCs Copy linkLink copied to clipboard!
You can configure how Persistent Volume Claims (PVCs) and Secrets are mounted in your Red Hat Developer Hub deployment. Use annotations to define custom mount paths and to specify the containers to mount them to.
9.1. Configuring mount paths for Secrets and PVCs Copy linkLink copied to clipboard!
By default, the mount path is the working directory of the Developer Hub container. If you do not define the mount path, it defaults to /opt/app-root/src.
Procedure
To specify a PVC mount path, add the rhdh.redhat.com/mount-path annotation to your configuration file as shown in the following example:

Example specifying where the PVC mounts

where:

- rhdh.redhat.com/mount-path: Specifies which mount path the PVC mounts to (in this case, the /mount/path/from/annotation directory).
- <my_claim>: Specifies the PVC to mount.
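A sketch of a PVC carrying the mount-path annotation; <my_claim> is the placeholder used above, and the spec is an arbitrary example:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <my_claim>
  annotations:
    rhdh.redhat.com/mount-path: /mount/path/from/annotation
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi   # arbitrary example size
```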
To specify a Secret mount path, add the rhdh.redhat.com/mount-path annotation to your configuration file as shown in the following example:

Example specifying where the Secret mounts

where:

- <my_secret>: Specifies the Secret name.
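A sketch of a Secret carrying the same annotation; <my_secret> is the placeholder used above, and the key-value pair is illustrative:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <my_secret>
  annotations:
    rhdh.redhat.com/mount-path: /mount/path/from/annotation
stringData:
  SOME_KEY: some-value   # illustrative data
```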
9.2. Mounting Secrets and PVCs to specific containers Copy linkLink copied to clipboard!
By default, Secrets and PVCs mount only to the Red Hat Developer Hub backstage-backend container. You can add the rhdh.redhat.com/containers annotation to your configuration file to specify the containers to mount to.
Procedure
To mount Secrets to all containers, set the rhdh.redhat.com/containers annotation to * in your configuration file:

Example mounting to all containers

Important: Set rhdh.redhat.com/containers to * to mount it to all containers in the deployment.

To mount to specific containers, separate the names with commas:
Example separating the list of containers
Note: This configuration mounts the <my_claim> PVC to the init-dynamic-plugins and backstage-backend containers.
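A sketch of the annotated PVC from the note above. The containers annotation lists the target containers as a comma-separated string; setting it to '*' would target all containers instead:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <my_claim>
  annotations:
    rhdh.redhat.com/mount-path: /mount/path/from/annotation
    rhdh.redhat.com/containers: 'init-dynamic-plugins,backstage-backend'
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi   # arbitrary example size
```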
Chapter 10. Enabling the Red Hat Developer Hub plugin assets cache Copy linkLink copied to clipboard!
By default, Red Hat Developer Hub does not cache plugin assets. You can use a Redis cache store to improve Developer Hub performance and reliability. Configured plugins in Developer Hub receive dedicated cache connections, which are powered by the Keyv Redis client.
Prerequisites
- You have installed Red Hat Developer Hub.
- You have an active Redis server. For more information about setting up an external Redis server, see the official Redis documentation.
Procedure
Enable the Developer Hub cache by defining Redis as the cache store type and entering your Redis server connection URL in your app-config.yaml file.

app-config.yaml file fragment:

    backend:
      cache:
        store: redis
        connection: redis://user:pass@cache.example.com:6379

Enable the cache for TechDocs by adding the techdocs.cache.ttl setting in your app-config.yaml file. This setting specifies how long, in milliseconds, a statically built asset should stay in the cache.

app-config.yaml file fragment:

    techdocs:
      cache:
        ttl: 3600000
Optionally, enable the cache for other installed plugins that support this feature. See their respective documentation for details.