
Chapter 5. Configuring Red Hat Ansible Automation Platform components on Red Hat Ansible Automation Platform Operator


After you have installed Ansible Automation Platform Operator and set up your Ansible Automation Platform components, you can configure them to suit your environment.

You can use these instructions to further configure the platform gateway operator on Red Hat OpenShift Container Platform, specify custom resources, and deploy Ansible Automation Platform with an external database.

There are two scenarios for deploying Ansible Automation Platform with an external database:

Fresh install
    You must specify a single external database instance for the platform to use for the following:

  • Platform gateway
  • Automation controller
  • Automation hub
  • Event-Driven Ansible
  • Red Hat Ansible Lightspeed (if enabled)

    See the aap-configuring-external-db-all-default-components.yml example in the 14.1. Custom resources section for help with this. If you are using Red Hat Ansible Lightspeed, use the aap-configuring-external-db-with-lightspeed-enabled.yml example.

Existing external database in 2.4
    Your existing external database remains the same after upgrading, but you must specify the external-postgres-configuration-gateway secret (spec.database.database_secret) on the Ansible Automation Platform custom resource.

To deploy Ansible Automation Platform with an external database, you must first create a Kubernetes secret with credentials for connecting to the database.

By default, the Ansible Automation Platform Operator automatically creates and configures a managed PostgreSQL pod in the same namespace as your Ansible Automation Platform deployment. You can deploy Ansible Automation Platform with an external database instead of the managed PostgreSQL pod that the Ansible Automation Platform Operator automatically creates.

Using an external database lets you share and reuse resources and manually manage backups, upgrades, and performance optimizations.

Note

The same external database (PostgreSQL instance) can be used for automation hub, automation controller, and platform gateway as long as the database names are different. In other words, you can have multiple databases with different names inside a single PostgreSQL instance.
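
For example, here is a minimal sketch of creating the separate databases on a single external PostgreSQL instance. Every name and password below is an illustrative example, not a required value:

-- Run with psql as a superuser on the external PostgreSQL instance.
-- All names and the password are example values only.
CREATE ROLE aap WITH LOGIN PASSWORD 'changeme';
CREATE DATABASE gateway_db OWNER aap;     -- platform gateway
CREATE DATABASE controller_db OWNER aap;  -- automation controller
CREATE DATABASE hub_db OWNER aap;         -- automation hub
CREATE DATABASE eda_db OWNER aap;         -- Event-Driven Ansible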

The following section outlines the steps to configure an external database for your platform gateway using the Ansible Automation Platform Operator.

Prerequisite

The external database must be a PostgreSQL database of a version supported by the current release of Ansible Automation Platform. The external PostgreSQL instance credentials and connection information must be stored in a secret, which is then set on the platform gateway spec.

Note

Ansible Automation Platform 2.5 supports PostgreSQL 15.
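
You can confirm the version of your external instance before you continue, for example (the connection details are placeholders):

$ psql -h <external_host> -p <external_port> -U <username> -d <database> -c "SHOW server_version;"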

Procedure

  1. Create a postgres_configuration_secret YAML file, following the template below:

    apiVersion: v1
    kind: Secret
    metadata:
      name: external-postgres-configuration
      namespace: <target_namespace> 1
    stringData:
      host: "<external_ip_or_url_resolvable_by_the_cluster>" 2
      port: "<external_port>" 3
      database: "<desired_database_name>"
      username: "<username_to_connect_as>"
      password: "<password_to_connect_with>" 4
      type: "unmanaged"
    type: Opaque
    1. Namespace to create the secret in. This should be the same namespace you want to deploy to.
    2. The resolvable hostname for your database node.
    3. The external port defaults to 5432.
    4. The password value must not contain single quotes ('), double quotes ("), or backslashes (\) to avoid issues during deployment, backup, or restoration.
  2. Apply external-postgres-configuration-secret.yml to your cluster using the oc create command.

    $ oc create -f external-postgres-configuration-secret.yml
    Note

    The following example is for a platform gateway deployment. To configure an external database for all components, use the aap-configuring-external-db-all-default-components.yml example in the 14.1. Custom resources section.

  3. When creating your AnsibleAutomationPlatform custom resource object, specify the secret on your spec, following the example below:

    apiVersion: aap.ansible.com/v1alpha1
    kind: AnsibleAutomationPlatform
    metadata:
      name: example-aap
      namespace: aap
    spec:
      database:
        database_secret: external-postgres-configuration
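
    You can apply the custom resource file and watch its status conditions while the operator reconciles. This is a minimal sketch; the file name is an example:

    $ oc apply -f aap-external-db.yml
    $ oc get ansibleautomationplatform example-aap -n aap -o jsonpath='{.status.conditions}'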

5.1.2. Troubleshooting an external database with an unexpected DateStyle set

When upgrading the Ansible Automation Platform Operator you may encounter an error like the following:

NotImplementedError: can't parse timestamptz with DateStyle 'Redwood, SHOW_TIME': '18-MAY-23 20:33:55.765755 +00:00'

Errors like this occur when you have an external database with an unexpected DateStyle set. You can refer to the following steps to resolve this issue.

Procedure

  1. Edit the /var/lib/pgsql/data/postgresql.conf file on the database server:

    # vi /var/lib/pgsql/data/postgresql.conf
  2. Find and comment out the line:

    #datestyle = 'Redwood, SHOW_TIME'
  3. Add the following setting immediately below the newly commented line:

    datestyle = 'iso, mdy'
  4. Save and close the postgresql.conf file.
  5. Reload the database configuration:

    # systemctl reload postgresql
    Note

    Running this command does not disrupt database operations.
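
Verification

You can confirm that the new setting is active, for example:

$ psql -d <database_name> -c "SHOW datestyle;"
 DateStyle
-----------
 ISO, MDY
(1 row)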

5.1.3. Enabling single sign-on (SSO) for platform gateway on OpenShift Container Platform

HTTPS redirect for SAML allows you to log in once and access all of the platform gateway without needing to reauthenticate.

Prerequisites

  • You have successfully configured SAML in the gateway from the Ansible Automation Platform Operator. Refer to Configuring SAML authentication for help with this.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Go to Operators > Installed Operators.
  3. Select your Ansible Automation Platform Operator deployment.
  4. Select All Instances and go to your AnsibleAutomationPlatform instance.
  5. Click the ⋮ icon and then select Edit AnsibleAutomationPlatform.
  6. In the YAML view, paste the following YAML code under the spec: section:

    spec:
      extra_settings:
        - setting: REDIRECT_IS_HTTPS
          value: '"True"'
  7. Click Save.

Verification

After you have added the REDIRECT_IS_HTTPS setting, wait for the pod to redeploy automatically. You can verify that the setting is present in the pod by running:

oc exec -it <gateway-pod-name> -- grep REDIRECT /etc/ansible-automation-platform/gateway/settings.py

5.1.4. Configuring your CSRF settings for your platform gateway Operator ingress

The Red Hat Ansible Automation Platform Operator creates OpenShift Routes and configures your cross-site request forgery (CSRF) settings automatically. When using an external ingress, you must configure CSRF on the ingress to allow cross-site requests. You can configure your platform gateway operator ingress under Advanced configuration.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Navigate to Operators > Installed Operators.
  3. Select your Ansible Automation Platform Operator deployment.
  4. Select the Ansible Automation Platform tab.
  5. For new instances, click Create AnsibleAutomationPlatform.

    1. For existing instances, you can edit the YAML view by clicking the ⋮ icon and then Edit AnsibleAutomationPlatform.
  6. Click Advanced Configuration.
  7. Under Ingress annotations, enter any annotations to add to the ingress.
  8. Under Ingress TLS secret, click the drop-down list and select a secret from the list.
  9. In the YAML view, paste in the following code:

    spec:
      extra_settings:
        - setting: CSRF_TRUSTED_ORIGINS
          value:
            - https://my-aap-domain.com
  10. After you have configured your platform gateway, click Create at the bottom of the form view (or Save if you are editing an existing instance).

Verification

Red Hat OpenShift Container Platform creates the pods. This may take a few minutes. You can view the progress by navigating to Workloads > Pods and locating the newly created instance. Verify that the following operator pods provided by the Red Hat Ansible Automation Platform Operator installation are running:


The operator manager controllers for each of the six operators include the following:

  • automation-controller-operator-controller-manager
  • automation-hub-operator-controller-manager
  • resource-operator-controller-manager
  • aap-gateway-operator-controller-manager
  • ansible-lightspeed-operator-controller-manager
  • eda-server-operator-controller-manager

After deploying automation controller, you can see the addition of the following pods:

  • Automation controller web
  • Automation controller task
  • Mesh ingress
  • Automation controller postgres

After deploying automation hub, you can see the addition of the following pods:

  • Automation hub web
  • Automation hub task
  • Automation hub API
  • Automation hub worker

After deploying EDA, you can see the addition of the following pods:

  • EDA API
  • EDA Activation
  • EDA worker
  • EDA stream
  • EDA Scheduler

After deploying platform gateway, you can see the addition of the following pods:

  • platform gateway
  • platform gateway redis
Note

A missing pod can indicate the need for a pull secret. Pull secrets are required for protected or private image registries. See Using image pull secrets for more information. You can diagnose this issue further by running oc describe pod <pod-name> to see if there is an ImagePullBackOff error on that pod.

5.1.5. Configuring custom PostgreSQL settings for Ansible Automation Platform

The postgres_extra_settings variable allows you to pass a list of custom name: value pairs directly to the PostgreSQL configuration file (/var/lib/pgsql/data/postgresql.conf) within the database pod.

Prerequisites

  • You have installed the Ansible Automation Platform Operator.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Go to Operators > Installed Operators.
  3. Select your Ansible Automation Platform Operator deployment.
  4. Select All Instances and go to your Ansible Automation Platform instance.
  5. Click the ⋮ icon and then select Edit Ansible Automation Platform.
  6. In the YAML view, locate the spec: section.
  7. Add the database section and the required settings under spec:. The following example configures the maximum number of connections:

    spec:
      database:
        postgres_extra_settings:
          - name: max_connections
            value: '1000'
  8. Click Save.

Verification

Inspect the PostgreSQL pod logs to verify the new settings.

Alternatively, you can run the following command to check the settings. Replace <aap postgres pod> with the name of your PostgreSQL pod.


$ oc exec -it <aap postgres pod> -- psql -d gateway -c "SHOW max_connections;"

5.1.6. Event-Driven Ansible event stream mTLS configuration variables

You can configure Mutual Transport Layer Security (mTLS) for the Event-Driven Ansible event stream by setting parameters in the AnsibleAutomationPlatform custom resource.

You can configure the following parameters nested under spec.eda.event_stream.

mtls
    Description: Controls whether mTLS is enabled for the event stream endpoint.
    Default value: true
    Notes: Set the value to false to disable event stream mTLS during installation.

mtls_prefix
    Description: Customizes the mTLS endpoint prefix for the event stream. You must provide a valid URL prefix.
    Default value: /mtls/eda-event-streams
    Notes: The value you provide is used as a prefix for the full endpoint URL. Customizing the full URL path is out of scope.

Custom resource example

The following example shows how to configure the event stream parameters in the AnsibleAutomationPlatform custom resource:

apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
  namespace: ansible-automation-platform
spec:
  eda:
    disabled: false
    event_stream:
      mtls: true
      mtls_prefix: /custom/path/mtls

5.1.7. Cascading client timeouts

Cascading timeouts ensure that if an outer layer of the system times out, inner processes also terminate to prevent resource exhaustion from orphaned requests.

Set the primary timeout at the Gateway level to allow Ansible Automation Platform to synchronize timeouts automatically across component applications.

5.1.7.1. Timeout relationships

The client_request_timeout serves as the primary value. Internal layers follow this logic:

  • The sum of the Envoy request_timeout and the gRPC authentication timeout (gateway_grpc_auth_service_timeout) must be less than the client_request_timeout.
  • The Nginx read timeout (nginx_read_timeout) must be less than or equal to the Envoy request_timeout.
  • The Python web server timeout (python_webserver_timeout) must be less than or equal to the nginx_read_timeout.
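
As an illustration only, the following values satisfy these relationships. Where each value is set depends on your deployment; placing them as fields under spec.gateway is an assumption for this sketch, not a documented schema:

spec:
  gateway:
    client_request_timeout: 300            # primary timeout, in seconds
    request_timeout: 240                   # Envoy layer: 240 + 30 < 300
    gateway_grpc_auth_service_timeout: 30  # gRPC authentication timeout
    nginx_read_timeout: 240                # <= Envoy request_timeout
    python_webserver_timeout: 240          # <= nginx_read_timeout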

5.1.7.2. Timeout grace periods

At the uWSGI layer, the uwsgi_timeout_grace_period allows the application to attempt a graceful shutdown. During this period, the application displays a traceback of the current stack position. If the process does not exit within the grace period, Ansible Automation Platform terminates it.

5.1.8. Increase the OpenShift Container Platform Route timeout

During high-volume API operations, such as Configuration as Code (CasC) restores, the OpenShift Route might time out if the operation exceeds the default 30-second window.

You must increase the client_request_timeout in the AnsibleAutomationPlatform Custom Resource (CR) to resolve HTTP 504 (Gateway Timeout) or HTTP 503 (Service Unavailable) errors.

Prerequisites

  • Access to the OpenShift Container Platform web console with administrator privileges.
  • Update the Ansible Automation Platform 2.5 operator to the latest version.

Procedure

  1. Log in to the OpenShift web console.
  2. Navigate to Installed Operators > Ansible Automation Platform > All Instances.
  3. Select your AnsibleAutomationPlatform instance.
  4. Click the YAML tab.
  5. In the spec: section, add the route_annotations to extend the timeout:

    spec:
      route_annotations: |
        haproxy.router.openshift.io/timeout: 180s
  6. Click Save.

Verification

  1. Navigate to Networking > Routes in the OpenShift console.
  2. Select the route for your Ansible Automation Platform instance.
  3. Verify the Annotations section contains the updated timeout value.
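
Alternatively, you can check the annotation from the CLI; the route name and namespace here are placeholders:

$ oc get route <aap-route-name> -n <namespace> -o jsonpath='{.metadata.annotations.haproxy\.router\.openshift\.io/timeout}'
180s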

5.1.9. Frequently asked questions on platform gateway

Manage your Ansible Automation Platform deployment and troubleshoot common issues with these frequently asked questions. Learn about resource management, logging, and error recovery for your components.

If I delete my Ansible Automation Platform deployment will I still have access to automation controller?
No, automation controller, automation hub, and Event-Driven Ansible are nested within the deployment and are also deleted.
How must I manage parameters when adding or removing them in the Ansible Automation Platform custom resource (CR) hierarchy?
When adding parameters, add them to the Ansible Automation Platform custom resource (CR) only; those parameters work their way down to the nested CRs.

When removing parameters, you have to remove them both from the Ansible Automation Platform CR and the nested CR, for example, the Automation Controller CR.

Something went wrong with my deployment but I’m not sure what, how can I find out?
You can follow along in the command line while the operator is reconciling; this can be helpful for debugging. Alternatively, you can click into the deployment instance to see the status conditions being updated as the deployment progresses.
Is it still possible to view individual component logs?
When troubleshooting you should examine the Ansible Automation Platform instance for the main logs and then each individual component (EDA, AutomationHub, AutomationController) for more specific information.
Where can I view the condition of an instance?
To display status conditions, click into the instance and look under the Details or Events tab. Alternatively, you can run the get command: oc get automationcontroller <instance-name> -o json | jq
Can I track my migration in real time?
To help track the status of the migration or to understand why migration might have failed you can look at the migration logs as they are running. Use the logs command: oc logs fresh-install-controller-migration-4.6.0-jwfm6 -f
I have configured my SAML but authentication fails with this error: "Unable to complete social auth login" What can I do?
You must update your Ansible Automation Platform instance to include the REDIRECT_IS_HTTPS extra setting. See Enabling single sign-on (SSO) for platform gateway on OpenShift Container Platform for help with this.

5.2. Configuring automation controller on Red Hat OpenShift Container Platform web console

You can use these instructions to configure the automation controller operator on Red Hat OpenShift Container Platform, specify custom resources, and deploy Ansible Automation Platform with an external database.

Automation controller configuration can be done through the automation controller extra_settings or directly in the user interface after deployment. However, it is important to note that configurations made in extra_settings take precedence over settings made in the user interface.

Note

When an instance of automation controller is removed, the associated PVCs are not automatically deleted. This can cause issues during migration if the new deployment has the same name as the previous one. Therefore, it is recommended that you manually remove old PVCs before deploying a new automation controller instance in the same namespace. See Finding and deleting PVCs for more information.

5.2.1. Prerequisites

  • You have installed the Red Hat Ansible Automation Platform catalog in Operator Hub.
  • For automation controller, a default StorageClass must be configured on the cluster for the operator to dynamically create needed PVCs. This is not necessary if an external PostgreSQL database is configured.
  • For automation hub, a StorageClass that supports ReadWriteMany must be available on the cluster to dynamically create the PVCs needed for the content, Redis, and API pods. If it is not the default StorageClass on the cluster, you can specify it when creating your AutomationHub object.

5.2.1.1. Configuring your controller image pull policy

Use this procedure to configure the image pull policy on your automation controller.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Go to Operators > Installed Operators.
  3. Select your Ansible Automation Platform Operator deployment.
  4. Select the Ansible Automation Platform tab.
  5. Click the ⋮ icon next to your Ansible Automation Platform instance and select Edit AnsibleAutomationPlatform.
  6. Click YAML view and locate the spec.controller: section.
  7. Configure the image pull policy and resource requirements under the controller: section:

    spec:
      controller:
        image_pull_policy: IfNotPresent  # Options: Always, Never, IfNotPresent
        image_pull_secrets:
          - pull-secret-name
        web_resource_requirements:
          limits:
            cpu: 1000m
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 1Gi
        task_resource_requirements:
          limits:
            cpu: 2000m
            memory: 4Gi
          requests:
            cpu: 1000m
            memory: 2Gi
        ee_resource_requirements:
          limits:
            cpu: 500m
            memory: 1Gi
          requests:
            cpu: 250m
            memory: 512Mi
        redis_resource_requirements:
          limits:
            cpu: 500m
            memory: 1Gi
          requests:
            cpu: 250m
            memory: 512Mi
        postgres_resource_requirements:
          limits:
            cpu: 1000m
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 1Gi
        postgres_storage_requirements:
          limits:
            storage: 10Gi
          requests:
            storage: 8Gi
        replicas: 1
        garbage_collect_secrets: false
        create_preload_data: true
  8. Click Save.

    Note

    These settings apply to the automation controller component managed by this Ansible Automation Platform instance. If you specified an existing controller under controller.name, these settings will update that instance.

    For more examples of Ansible Automation Platform custom resources, see Appendix: Red Hat Ansible Automation Platform custom resources.

5.2.1.2. Configuring your controller LDAP security

You can configure your LDAP SSL configuration for automation controller through any of the following options:

  • The automation controller user interface.
  • The platform gateway user interface. See the Configuring LDAP authentication section of the Access management and authentication guide for additional steps.
  • The following procedure steps.

Procedure

  1. Create a secret in your Ansible Automation Platform namespace for the bundle-ca.crt file (the filename must be bundle-ca.crt):

    $ oc create secret -n aap generic bundle-ca-secret --from-file=bundle-ca.crt
    Note

    The target filename for this operation must be bundle-ca.crt and the secret name should be bundle-ca-secret.

  2. Add the bundle_cacert_secret to the Ansible Automation Platform custom resource:

    ...
    spec:
      bundle_cacert_secret: bundle-ca-secret
    ...

    Verification

    You can verify the expected certificate by running:

    oc get deployments -l 'app.kubernetes.io/component=aap-gateway'

    Followed by:

    oc exec -it deployment.apps/<gateway-deployment-name-from-above> -- openssl x509 -in /etc/pki/tls/certs/ca-bundle.crt -noout -text

5.2.1.3. Configure automation controller operator route options

The Red Hat Ansible Automation Platform Operator installation form provides advanced options to configure your automation controller operator route.

Important

You must assign a unique metadata.name to each custom resource (CR) in your namespace. If you assign an AutomationControllerMeshIngress the same name as your Ansible Automation Platform installation, the operator overrides default routes and services. This conflict causes the platform installation to fail.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Navigate to Operators > Installed Operators.
  3. Select your Ansible Automation Platform Operator deployment.
  4. Select the Ansible Automation Platform tab.
  5. Click the ⋮ icon next to your Ansible Automation Platform instance and select Edit AnsibleAutomationPlatform.
  6. Click YAML view and locate the spec.controller: section.
  7. Configure the route options under the controller: section:

    spec:
      controller:
        ingress_type: Route
        route_host: controller.example.com  # Custom hostname for the route
        route_tls_termination_mechanism: Edge  # Options: Edge, Passthrough
        route_tls_secret: controller-tls-secret  # Optional: TLS credential secret
        projects_persistence: false  # Enable/disable persistence for /var/lib/projects
  8. Click Save.

    Note

    Edge termination is recommended for most instances. After configuring your route, you can customize additional route settings by adding them to the controller: section in the Ansible Automation Platform custom resource.

    For more examples of Ansible Automation Platform custom resources, see Appendix: Red Hat Ansible Automation Platform custom resources.

5.2.1.4. Configuring the ingress type for your automation controller operator

The Ansible Automation Platform Operator installation form allows you to further configure your automation controller operator ingress under Advanced configuration.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Navigate to Operators > Installed Operators.
  3. Select your Ansible Automation Platform Operator deployment.
  4. Select the Ansible Automation Platform tab.
  5. Click the ⋮ icon next to your Ansible Automation Platform instance and select Edit AnsibleAutomationPlatform.
  6. Click YAML view and locate the spec.controller: section.
  7. Configure the ingress options under the controller: section:

    spec:
      controller:
        ingress_type: Ingress
        ingress_annotations: |
          nginx.ingress.kubernetes.io/proxy-body-size: "0"
          nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
        ingress_tls_secret: controller-ingress-tls-secret
  8. Click Save.

    Note

    These ingress settings apply to the automation controller component managed by this Ansible Automation Platform instance. The operator automatically updates the ingress configuration for the controller.

    For more examples of Ansible Automation Platform custom resources, see Appendix: Red Hat Ansible Automation Platform custom resources.

Verification

After you have configured your automation controller ingress settings, Red Hat OpenShift Container Platform updates the pods. This may take a few minutes.

You can view the progress by navigating to Workloads > Pods and locating the newly created instance.

Verify that the following operator pods provided by the Ansible Automation Platform Operator installation are running:


The operator manager controllers for each of the six operators include the following:

  • automation-controller-operator-controller-manager
  • automation-hub-operator-controller-manager
  • resource-operator-controller-manager
  • aap-gateway-operator-controller-manager
  • ansible-lightspeed-operator-controller-manager
  • eda-server-operator-controller-manager

After deploying automation controller, you can see the addition of the following pods:

  • controller
  • controller-postgres
  • controller-web
  • controller-task

After deploying automation hub, you can see the addition of the following pods:

  • hub-api
  • hub-content
  • hub-postgres
  • hub-redis
  • hub-worker

After deploying EDA, you can see the addition of the following pods:

  • eda-activation-worker
  • eda-api
  • eda-default-worker
  • eda-event-stream
  • eda-scheduler
Note

A missing pod can indicate the need for a pull secret. Pull secrets are required for protected or private image registries. See Using image pull secrets for more information. You can diagnose this issue further by running oc describe pod <pod-name> to see if there is an ImagePullBackOff error on that pod.

5.2.2. Configuring an external database for automation controller

If you prefer to deploy Ansible Automation Platform with an external database, you can do so by configuring a secret with instance credentials and connection information, and then applying it to your cluster using the oc create command.

By default, the Ansible Automation Platform Operator automatically creates and configures a managed PostgreSQL pod in the same namespace as your Ansible Automation Platform deployment. You can deploy Ansible Automation Platform with an external database instead of the managed PostgreSQL pod that the Ansible Automation Platform Operator automatically creates.

Using an external database lets you share and reuse resources and manually manage backups, upgrades, and performance optimizations.

Note

The same external database (PostgreSQL instance) can be used for automation hub, automation controller, and platform gateway as long as the database names are different. In other words, you can have multiple databases with different names inside a single PostgreSQL instance.

The following section outlines the steps to configure an external database for your automation controller using the Ansible Automation Platform Operator.

Prerequisite

The external database must be a PostgreSQL database of a version supported by the current release of Ansible Automation Platform. The external PostgreSQL instance credentials and connection information must be stored in a secret, which is then set on the automation controller spec.

Note

Ansible Automation Platform 2.5 supports PostgreSQL 15.

Procedure

  1. Create a postgres_configuration_secret YAML file, following the template below:

    apiVersion: v1
    kind: Secret
    metadata:
      name: external-postgres-configuration
      namespace: <target_namespace> 1
    stringData:
      host: "<external_ip_or_url_resolvable_by_the_cluster>" 2
      port: "<external_port>" 3
      database: "<desired_database_name>"
      username: "<username_to_connect_as>"
      password: "<password_to_connect_with>" 4
      sslmode: "prefer" 5
      type: "unmanaged"
    type: Opaque
    1. Namespace to create the secret in. This should be the same namespace you want to deploy to.
    2. The resolvable hostname for your database node.
    3. The external port defaults to 5432.
    4. The password value must not contain single quotes ('), double quotes ("), or backslashes (\) to avoid issues during deployment, backup, or restoration.
    5. The sslmode variable is valid for external databases only. The allowed values are: prefer, disable, allow, require, verify-ca, and verify-full.
  2. Apply external-postgres-configuration-secret.yml to your cluster using the oc create command.

    $ oc create -f external-postgres-configuration-secret.yml
  3. When creating your AnsibleAutomationPlatform custom resource object, specify the secret under the controller section in your spec, following the example below:

    apiVersion: aap.ansible.com/v1alpha1
    kind: AnsibleAutomationPlatform
    metadata:
      name: myaap
    spec:
      controller:
        name: controller-dev  # Optional: specify existing instance or custom name
        postgres_configuration_secret: external-postgres-configuration
    Note

    If you have an existing automation controller instance, specify its name under controller.name to apply these settings to the existing instance. If you omit the name field, the operator will create a new instance with the default name pattern <aap-instance-name>-controller.

    For more examples of Ansible Automation Platform custom resources, see Appendix: Red Hat Ansible Automation Platform custom resources.

5.2.3. Finding and deleting PVCs

A persistent volume claim (PVC) is a storage volume used to store data that automation hub and automation controller applications use. This persistence is a key feature of static provisioning. If you redeploy an instance using the same name, the Operator must bind to these existing PVCs, allowing for data continuity across deployments. If you are confident that you no longer need a PVC, or have backed it up elsewhere, you can manually delete them.

Procedure

  1. List the existing PVCs in your deployment namespace:

    oc get pvc -n <namespace>
  2. Identify the PVC associated with your previous deployment by comparing the old deployment name and the PVC name.
  3. Delete the old PVC:

    oc delete pvc -n <namespace> <pvc-name>

5.3. Configuring automation hub on Red Hat OpenShift Container Platform web console

You can use these instructions to configure the automation hub operator on Red Hat OpenShift Container Platform, specify custom resources, and deploy Ansible Automation Platform with an external database.

Automation hub configuration can be done through the automation hub pulp_settings or directly in the user interface after deployment. However, it is important to note that configurations made in pulp_settings take precedence over settings made in the user interface. Hub settings should always be set as lowercase on the Hub custom resource specification.

Note

When an instance of automation hub is removed, the PVCs are not automatically deleted. This can cause issues during migration if the new deployment has the same name as the previous one. Therefore, it is recommended that you manually remove old PVCs before deploying a new automation hub instance in the same namespace. See Finding and deleting PVCs for more information.

5.3.1. Prerequisites

  • You have installed the Ansible Automation Platform Operator in Operator Hub.

5.3.1.1. Storage options for Ansible Automation Platform Operator installation on automation hub

Automation hub requires ReadWriteMany file-based storage, Azure Blob storage, or Amazon S3 storage for operation so that multiple pods can access shared content, such as collections.

The process for configuring object storage on the AutomationHub CR is similar for Amazon S3 and Azure Blob Storage.

If you are using file-based storage and your installation scenario includes automation hub, ensure that the storage option for Ansible Automation Platform Operator is set to ReadWriteMany. ReadWriteMany is the default storage option.

In addition, OpenShift Data Foundation provides a ReadWriteMany or S3 implementation. You can also set up NFS storage to support ReadWriteMany; this, however, introduces the NFS server as a potential single point of failure.

5.3.1.1.1. Provisioning OCP storage with ReadWriteMany access mode

To ensure successful installation of the Ansible Automation Platform Operator, you must initially provision your storage type for automation hub with ReadWriteMany access mode.

Procedure

  1. Go to Storage > PersistentVolume.
  2. Click Create PersistentVolume.
  3. In the first step, update the accessModes from the default ReadWriteOnce to ReadWriteMany.

    1. See Provisioning to update the access mode for a detailed overview.
  4. Complete the additional steps in this section to create the persistent volume claim (PVC).
5.3.1.1.2. Configuring object storage on Amazon S3

Red Hat supports Amazon Simple Storage Service (S3) for automation hub. You can configure it when deploying the AnsibleAutomationPlatform custom resource (CR), or you can configure it for an existing instance.

Prerequisites

  • Create an Amazon S3 bucket to store the objects.
  • Note the name of the S3 bucket.

Procedure

  1. Create a Kubernetes secret containing the AWS credentials and connection details, and the name of your Amazon S3 bucket. The following example creates a secret called test-s3:

    $ oc -n $HUB_NAMESPACE apply -f- <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: 'test-s3'
    stringData:
      s3-access-key-id: $S3_ACCESS_KEY_ID
      s3-secret-access-key: $S3_SECRET_ACCESS_KEY
      s3-bucket-name: $S3_BUCKET_NAME
      s3-region: $S3_REGION
    EOF
  2. Add the secret to the Ansible Automation Platform custom resource (CR) under the hub section in the spec:

    apiVersion: aap.ansible.com/v1alpha1
    kind: AnsibleAutomationPlatform
    metadata:
      name: myaap
    spec:
      hub:
        storage_type: S3
        object_storage_s3_secret: test-s3
    Note

    If you have an existing automation hub instance, specify its name using hub.name: existing-hub-name to apply these settings to the existing instance.

    For more examples of Ansible Automation Platform custom resources, see Appendix: Red Hat Ansible Automation Platform custom resources.

  3. If you are applying this secret to an existing instance, restart the API pods for the change to take effect. <hub-name> is the name of your hub instance.

    $ oc -n $HUB_NAMESPACE delete pod -l app.kubernetes.io/name=<hub-name>-api
5.3.1.1.3. Configuring object storage on Azure Blob

Red Hat supports Azure Blob Storage for automation hub. You can configure it when deploying the AnsibleAutomationPlatform custom resource (CR), or you can configure it for an existing instance.

Prerequisites

  • Create an Azure Storage blob container to store the objects.
  • Note the name of the blob container.

Procedure

  1. Create a Kubernetes secret containing the credentials and connection details for your Azure account, and the name of your Azure Storage blob container. The following example creates a secret called test-azure:

    $ oc -n $HUB_NAMESPACE apply -f- <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: 'test-azure'
    stringData:
      azure-account-name: $AZURE_ACCOUNT_NAME
      azure-account-key: $AZURE_ACCOUNT_KEY
      azure-container: $AZURE_CONTAINER
      azure-container-path: $AZURE_CONTAINER_PATH
      azure-connection-string: $AZURE_CONNECTION_STRING
    EOF
  2. Add the secret to the Ansible Automation Platform custom resource (CR) under the hub section in the spec:

    apiVersion: aap.ansible.com/v1alpha1
    kind: AnsibleAutomationPlatform
    metadata:
      name: myaap
    spec:
      hub:
        storage_type: azure
        object_storage_azure_secret: test-azure
    Note

    If you have an existing automation hub instance, specify its name using hub.name: existing-hub-name to apply these settings to the existing instance.

    For more examples of Ansible Automation Platform custom resources, see Appendix: Red Hat Ansible Automation Platform custom resources.

  3. If you are applying this secret to an existing instance, restart the API pods for the change to take effect. <hub-name> is the name of your hub instance.

    $ oc -n $HUB_NAMESPACE delete pod -l app.kubernetes.io/name=<hub-name>-api

5.3.1.2. Configure your automation hub operator route options

The Red Hat Ansible Automation Platform operator installation form allows you to further configure your automation hub operator route options under Advanced configuration.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Navigate to Operators > Installed Operators.
  3. Select your Ansible Automation Platform Operator deployment.
  4. Select the Ansible Automation Platform tab.
  5. Click the ⋮ icon next to your Ansible Automation Platform instance and select Edit AnsibleAutomationPlatform.
  6. Click YAML view and locate the spec.hub: section.
  7. Configure the route options under the hub: section:

    spec:
      hub:
        ingress_type: Route
        route_host: hub.example.com  # Custom hostname for the route
        route_tls_termination_mechanism: Edge  # Options: Edge, Passthrough
        route_tls_secret: hub-tls-secret  # Optional: TLS credential secret
  8. Click Save.

    Note

    Edge termination is recommended for most instances. After configuring your route, you can customize additional route settings by adding them to the hub: section in the Ansible Automation Platform custom resource.

    For more examples of Ansible Automation Platform custom resources, see Appendix: Red Hat Ansible Automation Platform custom resources.

5.3.1.3. Configuring the ingress type for your automation hub operator

The Ansible Automation Platform Operator installation form allows you to further configure your automation hub operator ingress under Advanced configuration.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Navigate to Operators > Installed Operators.
  3. Select your Ansible Automation Platform Operator deployment.
  4. Select the Ansible Automation Platform tab.
  5. Click the ⋮ icon next to your Ansible Automation Platform instance and select Edit AnsibleAutomationPlatform.
  6. Click YAML view and locate the spec.hub: section.
  7. Configure the ingress options under the hub: section:

    spec:
      hub:
        ingress_type: Ingress
        ingress_annotations: |
          nginx.ingress.kubernetes.io/proxy-body-size: "0"
          nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
        ingress_tls_secret: hub-ingress-tls-secret
  8. Click Save.

    Note

    These ingress settings apply to the automation hub component managed by this Ansible Automation Platform instance. The operator automatically updates the ingress configuration for the hub.

    For more examples of Ansible Automation Platform custom resources, see Appendix: Red Hat Ansible Automation Platform custom resources.

Verification

After you have configured your automation hub ingress settings, Red Hat OpenShift Container Platform updates the pods. This may take a few minutes.

You can view the progress by navigating to Workloads > Pods and locating the newly created instance.

Verify that the following operator pods provided by the Ansible Automation Platform Operator installation are running:


The operator manager controllers for each of the three operators include the following:

  • automation-controller-operator-controller-manager
  • automation-hub-operator-controller-manager
  • resource-operator-controller-manager

After deploying automation controller, you will see the addition of these pods:

  • controller
  • controller-postgres

After deploying automation hub, you will see the addition of these pods:

  • hub-api
  • hub-content
  • hub-postgres
  • hub-redis
  • hub-worker
Note

A missing pod can indicate the need for a pull secret. Pull secrets are required for protected or private image registries. See Using image pull secrets for more information. You can diagnose this issue further by running oc describe pod <pod-name> to see if there is an ImagePullBackOff error on that pod.

5.3.2. Finding the automation hub route

You can access the automation hub through the platform gateway or through the following procedure.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Navigate to Networking > Routes.
  3. Under Location, click the URL for your automation hub instance.

Verification

The automation hub user interface launches where you can sign in with the administrator credentials specified during the operator configuration process.

Note

If you did not specify an administrator password during configuration, one was automatically created for you. To locate this password, go to your project, select Workloads > Secrets and open controller-admin-password. From there you can copy the password and paste it into the Automation hub password field.
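
You can also read the password from the CLI. This is a sketch that assumes the secret stores the value under a password key:

$ oc get secret controller-admin-password -n <namespace> -o jsonpath='{.data.password}' | base64 --decode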

5.3.3. Configuring an external database for automation hub

If you prefer to deploy Ansible Automation Platform with an external database, you can do so by configuring a secret with instance credentials and connection information, and then applying it to your cluster using the oc create command.

By default, the Ansible Automation Platform Operator automatically creates and configures a managed PostgreSQL pod in the same namespace as your Ansible Automation Platform deployment.

You can choose to use an external database instead if you prefer to use a dedicated node to ensure dedicated resources or to manually manage backups, upgrades, or performance tweaks.

Note

The same external database (PostgreSQL instance) can be used for automation hub, automation controller, and platform gateway as long as the database names are different. In other words, you can have multiple databases with different names inside a single PostgreSQL instance.

The following section outlines the steps to configure an external database for your automation hub using the Ansible Automation Platform Operator.

Prerequisite

The external database must be a PostgreSQL database of a version supported by the current release of Ansible Automation Platform. The external PostgreSQL instance credentials and connection information must be stored in a secret, which is then set on the automation hub spec.

Note

Ansible Automation Platform 2.5 supports PostgreSQL 15.

Procedure

  1. Create a postgres_configuration_secret YAML file, following the template below:

    apiVersion: v1
    kind: Secret
    metadata:
      name: external-postgres-configuration
      namespace: <target_namespace> 1
    stringData:
      host: "<external_ip_or_url_resolvable_by_the_cluster>" 2
      port: "<external_port>" 3
      database: "<desired_database_name>"
      username: "<username_to_connect_as>"
      password: "<password_to_connect_with>" 4
      sslmode: "prefer" 5
      type: "unmanaged"
    type: Opaque
    1. Namespace to create the secret in. This should be the same namespace you want to deploy to.
    2. The resolvable hostname for your database node.
    3. The external port defaults to 5432.
    4. The password value must not contain single quotes ('), double quotes ("), or backslashes (\) to avoid issues during deployment, backup, or restoration.
    5. The sslmode variable is valid for external databases only. The allowed values are: prefer, disable, allow, require, verify-ca, and verify-full.
  2. Apply external-postgres-configuration-secret.yml to your cluster using the oc create command.

    $ oc create -f external-postgres-configuration-secret.yml
  3. When creating your AnsibleAutomationPlatform custom resource object, specify the secret under the hub section in your spec, following the example below:

    apiVersion: aap.ansible.com/v1alpha1
    kind: AnsibleAutomationPlatform
    metadata:
      name: myaap
    spec:
      hub:
        name: hub-dev  # Optional: specify existing instance or custom name
        postgres_configuration_secret: external-postgres-configuration
        storage_type: file
        file_storage_storage_class: <your-read-write-many-storage-class>
        file_storage_size: 10Gi
    Note

    If you have an existing automation hub instance, specify its name under hub.name to apply these settings to the existing instance. If you omit the name field, the operator will create a new instance with the default name pattern <aap-instance-name>-hub.

    For more examples of Ansible Automation Platform custom resources, see Appendix: Red Hat Ansible Automation Platform custom resources.

5.3.3.1. Enabling the hstore extension for the automation hub PostgreSQL database

The database migration script uses hstore fields to store information; therefore, the hstore extension must be enabled in the automation hub PostgreSQL database.

This process is automatic when using the Ansible Automation Platform installer and a managed PostgreSQL server.

If the PostgreSQL database is external, you must enable the hstore extension in the automation hub PostgreSQL database manually before installation.

If the hstore extension is not enabled before installation, the database migration fails.

Procedure

  1. Check if the extension is available on the PostgreSQL server (automation hub database).

    $ psql -d <automation hub database> -c "SELECT * FROM pg_available_extensions WHERE name='hstore'"
    The default value for <automation hub database> is automationhub.

    Example output with hstore available:

    name  | default_version | installed_version |comment
    ------+-----------------+-------------------+---------------------------------------------------
     hstore | 1.7           |                   | data type for storing sets of (key, value) pairs
    (1 row)

    Example output with hstore not available:

     name | default_version | installed_version | comment
    ------+-----------------+-------------------+---------
    (0 rows)
  2. On a RHEL-based server, the hstore extension is included in the postgresql-contrib RPM package, which is not installed automatically when installing the PostgreSQL server RPM package.

    To install the RPM package, use the following command:

    dnf install postgresql-contrib
  3. Load the hstore PostgreSQL extension into the automation hub database with the following command:

    $ psql -d <automation hub database> -c "CREATE EXTENSION hstore;"

    In the following output, the installed_version field lists the hstore extension used, indicating that hstore is enabled.

    name | default_version | installed_version | comment
    -----+-----------------+-------------------+------------------------------------------------------
    hstore  |     1.7      |       1.7         | data type for storing sets of (key, value) pairs
    (1 row)

5.3.4. Finding and deleting PVCs

A persistent volume claim (PVC) is a storage volume used to store data that automation hub and automation controller applications use. This persistence is a key feature of static provisioning. If you redeploy an instance using the same name, the Operator must bind to these existing PVCs, allowing for data continuity across deployments. If you are confident that you no longer need a PVC, or have backed it up elsewhere, you can manually delete them.

Procedure

  1. List the existing PVCs in your deployment namespace:

    oc get pvc -n <namespace>
  2. Identify the PVC associated with your previous deployment by comparing the old deployment name and the PVC name.
  3. Delete the old PVC:

    oc delete pvc -n <namespace> <pvc-name>

5.3.5. Additional configurations

A collection download count can help you understand collection usage. To add a collection download count to automation hub, set the following configuration:

spec:
  pulp_settings:
    ansible_collect_download_count: true

When ansible_collect_download_count is enabled, automation hub displays a download count next to each collection.

5.3.6. Adding allowed registries to the automation controller image configuration

Before you can deploy a container image in automation hub, you must add the registry to the allowedRegistries list in the automation controller image configuration. To do this, copy and paste the following code into your automation controller image YAML.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Navigate to Home > Search.
  3. Select the Resources drop-down list and type "Image".
  4. Select Image (config,openshift.io/v1).
  5. Click Cluster under the Name heading.
  6. Select the YAML tab.
  7. Paste the following under the spec value:

    spec:
      registrySources:
        allowedRegistries:
        - quay.io
        - registry.redhat.io
        - image-registry.openshift-image-registry.svc:5000
        - <OCP route for your automation hub>
  8. Click Save.

5.3.7. Configuring content signing for Ansible Automation Platform Hub Operator

As an automation administrator for your organization, you can configure Ansible Automation Platform Hub Operator for signing and publishing Ansible content collections from different groups within your organization.

For additional security, automation creators can configure Ansible-Galaxy CLI to verify these collections to ensure that they have not been changed after they were uploaded to automation hub.

To successfully sign and publish Ansible Certified Content Collections, you must configure private automation hub for signing.

Prerequisites

  • A GPG key pair. If you do not have one, you can generate one using the gpg --full-generate-key command.
  • Your public-private key pair has proper access for configuring content signing on Ansible Automation Platform Hub Operator.

Procedure

  1. Create a ConfigMap for signing scripts. The ConfigMap you create contains the scripts used by the signing service for collections and container images.

    Note

    This script is used as part of the signing service and must generate an ASCII-armored detached GPG signature for that file using the key specified through the PULP_SIGNING_KEY_FINGERPRINT environment variable.

    The script prints out a JSON structure with the following format.

    {"file": "filename", "signature": "filename.asc"}

    All the file names are relative paths inside the current working directory. The file name must remain the same for the detached signature.

    Example: The following script produces signatures for content:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: signing-scripts
    data:
      collection_sign.sh: |-
          #!/usr/bin/env bash
    
          FILE_PATH=$1
          SIGNATURE_PATH=$1.asc
    
          ADMIN_ID="$PULP_SIGNING_KEY_FINGERPRINT"
          PASSWORD="password"
    
          # Create a detached signature
          gpg --quiet --batch --pinentry-mode loopback --yes --passphrase \
            $PASSWORD --homedir /var/lib/pulp/.gnupg --detach-sign --default-key $ADMIN_ID \
            --armor --output $SIGNATURE_PATH $FILE_PATH
    
          # Check the exit status
          STATUS=$?
          if [ $STATUS -eq 0 ]; then
            echo {\"file\": \"$FILE_PATH\", \"signature\": \"$SIGNATURE_PATH\"}
          else
            exit $STATUS
          fi
      container_sign.sh: |-
        #!/usr/bin/env bash
    
        # galaxy_container SigningService will pass the next 4 variables to the script.
        MANIFEST_PATH=$1
        FINGERPRINT="$PULP_SIGNING_KEY_FINGERPRINT"
        IMAGE_REFERENCE="$REFERENCE"
        SIGNATURE_PATH="$SIG_PATH"
    
        # Create container signature using skopeo
        skopeo standalone-sign \
          $MANIFEST_PATH \
          $IMAGE_REFERENCE \
          $FINGERPRINT \
          --output $SIGNATURE_PATH
    
        # Optionally pass the passphrase to the key if password protected.
        # --passphrase-file /path/to/key_password.txt
    
        # Check the exit status
        STATUS=$?
        if [ $STATUS -eq 0 ]; then
          echo {\"signature_path\": \"$SIGNATURE_PATH\"}
        else
          exit $STATUS
        fi
  2. Create a secret for your GnuPG private key. This secret securely stores the GnuPG private key you use for signing.

    gpg --export-secret-keys --armor <your-gpg-key-id> > signing_service.gpg
    
    oc create secret generic signing-galaxy --from-file=signing_service.gpg

    The secret must have a key named signing_service.gpg.
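
    To confirm which key the exported file contains before referencing it, you can inspect the file with GnuPG (the --show-keys option requires GnuPG 2.2 or later):

    $ gpg --show-keys signing_service.gpg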

  3. Configure the AnsibleAutomationPlatform CR.

    apiVersion: aap.ansible.com/v1alpha1
    kind: AnsibleAutomationPlatform
    metadata:
      name: aap-hub-signing-sample
    spec:
      hub:
        signing_secret: "signing-galaxy"
        signing_scripts_configmap: "signing-scripts"

5.4. Configure static storage for Ansible Automation Platform

Configure static storage when your environment does not support dynamic volume provisioning. This process ensures the Ansible Automation Platform Operator adopts manually created Persistent Volume Claims by using specific naming conventions.

5.4.1. Understand static provisioning in the Ansible Automation Platform Operator

By default, the Ansible Automation Platform Operator uses dynamic provisioning to create the required storage for components such as the database and automation hub.

If your environment does not allow dynamic provisioning, you must use static provisioning.

With static provisioning, you manually create Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) before you deploy the AnsibleAutomationPlatform custom resource. When the Operator starts the deployment, it searches the namespace for PVCs that match its internal naming conventions. If a matching PVC exists, the Operator binds to that claim instead of attempting to provision new storage.

Static provisioning also enables data persistence during redeployments. If you delete an AnsibleAutomationPlatform instance, the Operator does not delete the associated PVCs. You can redeploy the instance using the same name to reconnect to the existing data.

5.4.2. Pre-create Persistent Volume Claims for manual provisioning

Follow this process to manually prepare storage for an Ansible Automation Platform installation when dynamic provisioning is disabled.

Prerequisites

  • You have an active OpenShift Container Platform CLI (oc) session.
  • You have defined Persistent Volumes (PVs) that meet the minimum size and access mode requirements for your components.

Procedure

  1. Identify the name you intend to use for your AnsibleAutomationPlatform deployment (for example, myaap).
  2. Create a PVC manifest for the PostgreSQL database using the required naming convention, postgres-15-<deployment_name>-postgres-15-0, as listed in the naming conventions table in the next section (a sketch manifest appears after this procedure).
  3. Ensure the accessModes and resources.requests.storage match your manually provisioned PV.
  4. Apply the PVC manifest:

    oc apply -f postgres-pvc.yaml
  5. Repeat these steps for other components, such as automation hub, using the correct naming conventions.
  6. Leave the storage_class fields empty or omit them from the specification. This forces the Operator to use the pre-created PVCs.

    Note

    Unlike core components, the AnsibleAutomationPlatformBackup and Restore custom resources provide a backup_pvc parameter. You must use this parameter to specify your custom PVC name instead of relying on naming conventions.
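
The following is a sketch of the postgres-pvc.yaml manifest referenced in this procedure, for a deployment named myaap. The size, namespace, and volume name are example values and must match your manually provisioned PV:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-15-myaap-postgres-15-0  # follows the database PVC naming convention for a deployment named myaap
  namespace: <namespace>
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi  # example size; match your pre-created PV
  volumeName: <pre-created-pv-name>  # optional: bind to a specific PV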

Verification

  • Check the status of the PVCs to ensure they are in a Bound state:

    oc get pvc -n <namespace>

5.4.3. PVC naming conventions for Ansible Automation Platform components

The Operator must find PVCs with exact names to adopt them for static provisioning. Replace <instance_name> with the name of your AnsibleAutomationPlatform custom resource.

Ansible Automation Platform database
    Required PVC name: postgres-15-<aap_cr_name>-postgres-15-0
    Default access mode: ReadWriteOnce

Automation hub storage
    Required PVC name: <instance_name>-hub-file-storage (required when storage_type is set to file)
    Default access mode: ReadWriteMany

Automation hub Redis persistence
    Required PVC name: <instance_name>-hub-redis-data
    Default access mode: ReadWriteOnce

5.5. Deploying Redis on Red Hat Ansible Automation Platform Operator

When you create an Ansible Automation Platform instance through the Ansible Automation Platform Operator, standalone Redis is assigned by default. If you would prefer to deploy clustered Redis, you can use the following procedure.

For more information about Redis, refer to Caching and queueing system in the Planning your installation guide.

Important

Switching Redis modes on an existing instance is not supported and can lead to unexpected consequences, including data loss. To change the Redis mode, you must deploy a new instance.

Prerequisites

  • You have installed an Ansible Automation Platform Operator deployment.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Navigate to Operators > Installed Operators.
  3. Select your Ansible Automation Platform Operator deployment.
  4. Select the Details tab.
  5. On the Ansible Automation Platform tile, click Create instance.

    1. For existing instances, you can edit the YAML view by clicking the ⋮ icon and then Edit AnsibleAutomationPlatform.
    2. Change the redis_mode value to "cluster" (see the sketch after this procedure).
    3. Click Reload, then Save.
  6. Click to expand Advanced configuration.
  7. For the Redis Mode list, select Cluster.
  8. Configure the rest of your instance as necessary, then click Create.
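
For reference, a minimal sketch of the relevant field in the YAML view, assuming redis_mode sits at the top level of spec as the form view suggests:

spec:
  redis_mode: cluster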

Verification

Your instance deploys with clustered Redis with six Redis replicas by default.

Note

You can modify your automation hub default Redis cache PVC volume size. For help with this, see Modifying the default redis cache PVC volume size for automation hub.
