7.2. Upgrading your Independent Mode Setup


Follow the steps in the sections ahead to upgrade your independent mode setup.

7.2.1. Upgrading the Red Hat Gluster Storage Cluster

To upgrade the Red Hat Gluster Storage cluster, see In-Service Software Upgrade.

7.2.2. Upgrading or Migrating Heketi on the RHGS Node

Note

If Heketi is on an OpenShift node, skip this section and see Section 7.2.4.1, “Upgrading Heketi in OpenShift node” instead.

Important

  • In OCS 3.11, upgrading Heketi in place on the RHGS node is not supported. You must instead migrate Heketi to a new Heketi pod.
  • Migrate to the supported Heketi deployment now, as a migration path might not exist in future versions.
  • Ensure that the cns-deploy rpm is installed on the master node. This package provides the template files necessary to set up the Heketi pod.
    # subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
    # yum install cns-deploy
  1. Use the newly created containerized Red Hat Gluster Storage project on the master node:
    # oc project <project-name>
    For example:
    # oc project gluster
  2. Execute the following command on the master node to create the service account:
    # oc create -f /usr/share/heketi/templates/heketi-service-account.yaml
    serviceaccount/heketi-service-account created
  3. Execute the following command on the master node to install the heketi template:
    # oc create -f /usr/share/heketi/templates/heketi-template.yaml
    template.template.openshift.io/heketi created
  4. Verify that the templates were created:
    # oc get templates

    NAME            DESCRIPTION                          PARAMETERS    OBJECTS
    heketi          Heketi service deployment template   5 (3 blank)   3
  5. Execute the following commands on the master node to grant the heketi service account the necessary privileges:
    # oc policy add-role-to-user edit system:serviceaccount:gluster:heketi-service-account
    role "edit" added: "system:serviceaccount:gluster:heketi-service-account"
    # oc adm policy add-scc-to-user privileged -z heketi-service-account
    scc "privileged" added to: ["system:serviceaccount:gluster:heketi-service-account"]
  6. On the RHGS node where heketi is running, execute the following commands:
    1. Create the heketidbstorage volume:
      # heketi-cli volume create --size=2 --name=heketidbstorage
    2. Mount the volume:
      # mount -t glusterfs 192.168.11.192:heketidbstorage /mnt/
      where 192.168.11.192 is one of the RHGS nodes.
    3. Stop the heketi service:
      # systemctl stop heketi
    4. Disable the heketi service:
      # systemctl disable heketi
    5. Copy the heketi db to the heketidbstorage volume:
      # cp /var/lib/heketi/heketi.db /mnt/
    6. Unmount the volume:
      # umount /mnt
    7. Copy the following files from the heketi node to the master node:
      # scp /etc/heketi/heketi.json topology.json /etc/heketi/heketi_key OCP_master_node:/root/
      where OCP_master_node is the hostname of the master node.
  7. On the master node, set environment variables for the three files that were copied from the heketi node. Add the following lines to the ~/.bashrc file and run the bash command to apply and save the changes:
    export SSH_KEYFILE=heketi_key
    export TOPOLOGY=topology.json
    export HEKETI_CONFIG=heketi.json

    Note

    If you have changed the value for "keyfile" in /etc/heketi/heketi.json to a different value, change it here accordingly.
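The exports above can be applied idempotently. A minimal sketch, using a temporary file in place of the real ~/.bashrc so it is safe to run anywhere:

```shell
# Append each export to the profile only if it is not already present, then
# source the file and confirm the variables resolve. A temp file stands in
# for ~/.bashrc in this sketch.
BASHRC=$(mktemp)
for line in 'export SSH_KEYFILE=heketi_key' \
            'export TOPOLOGY=topology.json' \
            'export HEKETI_CONFIG=heketi.json'; do
  grep -qxF "$line" "$BASHRC" || echo "$line" >> "$BASHRC"
done
. "$BASHRC"
echo "$SSH_KEYFILE $TOPOLOGY $HEKETI_CONFIG"
```

Running the loop twice leaves the file unchanged, so re-sourcing ~/.bashrc never duplicates the exports.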
  8. Execute the following command to create a secret to hold the configuration file:
    # oc create secret generic heketi-config-secret --from-file=${SSH_KEYFILE} --from-file=${HEKETI_CONFIG} --from-file=${TOPOLOGY}
    secret/heketi-config-secret created
  9. Execute the following command to label the secret:
    # oc label --overwrite secret heketi-config-secret glusterfs=heketi-config-secret heketi=config-secret
    secret/heketi-config-secret labeled
  10. Create a heketi-gluster-endpoints.yaml file that lists the IP addresses of all the glusterfs nodes. For example:
    # cat heketi-gluster-endpoints.yaml
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: heketi-storage-endpoints
    subsets:
    - addresses:
      - ip: 192.168.11.208
      ports:
      - port: 1
    - addresses:
      - ip: 192.168.11.176
      ports:
      - port: 1
    - addresses:
      - ip: 192.168.11.192
      ports:
      - port: 1
    In the above example, 192.168.11.208, 192.168.11.176, and 192.168.11.192 are the glusterfs nodes.
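With many nodes, the endpoints file can be generated rather than written by hand. A hedged sketch (the node IPs and file name match the example above; substitute your own cluster's addresses):

```shell
# Emit a heketi-storage-endpoints manifest with one subset per node IP.
gen_endpoints() {
  printf 'apiVersion: v1\nkind: Endpoints\nmetadata:\n  name: heketi-storage-endpoints\nsubsets:\n'
  for ip in "$@"; do
    printf -- '- addresses:\n  - ip: %s\n  ports:\n  - port: 1\n' "$ip"
  done
}
gen_endpoints 192.168.11.208 192.168.11.176 192.168.11.192 > heketi-gluster-endpoints.yaml
```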
  11. Execute the following command to create the endpoints:
    # oc create -f ./heketi-gluster-endpoints.yaml
  12. Create a heketi-gluster-service.yaml file for the service. For example:
    # cat heketi-gluster-service.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: heketi-storage-endpoints
    spec:
      ports:
      - port: 1
    Execute the following command to create the service:
    # oc create -f ./heketi-gluster-service.yaml
  13. Execute the following command to deploy the Heketi service, route, and deployment configuration, which will be used to create persistent volumes for OpenShift:
    # oc process heketi | oc create -f -
    service/heketi created
    route.route.openshift.io/heketi created
    deploymentconfig.apps.openshift.io/heketi created
  14. To verify that Heketi is migrated, execute the following command on the master node:
    # oc rsh po/<heketi-pod-name>
    For example:
    # oc rsh po/heketi-1-p65c6
  15. Execute the following command to check the cluster IDs:
    # heketi-cli cluster list
    From the output, verify that the cluster ID matches the old cluster.
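The comparison can be scripted. A sketch with placeholder IDs (record the old ID with `heketi-cli cluster list` before migrating, and capture the new ID the same way inside the new pod; both values below are hypothetical):

```shell
# Both IDs are illustrative placeholders, not real cluster IDs.
OLD_CLUSTER_ID="652b4b6f40ac0c5c8c6d78a1a3e4f5d2"   # noted before the migration
NEW_CLUSTER_ID="652b4b6f40ac0c5c8c6d78a1a3e4f5d2"   # from the new heketi pod
if [ "$OLD_CLUSTER_ID" = "$NEW_CLUSTER_ID" ]; then
  echo "cluster ID preserved"
else
  echo "cluster ID mismatch" >&2
fi
```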

7.2.3. Upgrading if the existing version was deployed using cns-deploy

7.2.3.1. Upgrading Heketi in OpenShift node

The following commands must be executed on the client machine.
  1. Execute the following command to update the heketi client and cns-deploy packages:
    # yum update cns-deploy -y
    # yum update heketi-client -y
  2. Back up the Heketi database file:
    # oc rsh <heketi_pod_name>
    # cp -a /var/lib/heketi/heketi.db /var/lib/heketi/heketi.db.`date +%s`.`heketi --version | awk '{print $2}'`
    # exit
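The backup filename above encodes the current epoch time and the running heketi version. A sketch of the naming scheme, with the version output stubbed so it runs outside the pod (inside the pod, `heketi --version` prints something like `Heketi 7.0.0`):

```shell
# Stub the version string; in the pod this would be `heketi --version`.
VERSION=$(echo "Heketi 7.0.0" | awk '{print $2}')
STAMP=$(date +%s)
BACKUP="heketi.db.${STAMP}.${VERSION}"
echo "$BACKUP"
```

The timestamp plus version suffix lets several backups coexist and identifies which release each one came from.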
  3. Execute the following command to delete the heketi template:
    # oc delete templates heketi
  4. Execute the following command to get the current HEKETI_ADMIN_KEY.
    The OCS admin can choose any phrase for the user key, as long as it is not used by their infrastructure. It is not used by any of the OCS default installed resources.
    # oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}' | base64 -d; echo
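The key is stored base64-encoded in the secret, and the `base64 -d` step simply reverses that encoding. A sketch of the round trip with a sample value (not a real key):

```shell
# Encode a sample key the way the secret stores it, then decode it back,
# mirroring the `| base64 -d` in the command above.
ENCODED=$(printf '%s' 'adminkey' | base64)
HEKETI_ADMIN_KEY=$(printf '%s' "$ENCODED" | base64 -d)
echo "$HEKETI_ADMIN_KEY"
```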
  5. Execute the following command to install the heketi template:
    # oc create -f /usr/share/heketi/templates/heketi-template.yaml
    template "heketi" created
  6. Execute the following commands to grant the heketi service account the necessary privileges:
    # oc policy add-role-to-user edit system:serviceaccount:<project_name>:heketi-service-account
    # oc adm policy add-scc-to-user privileged -z heketi-service-account
    For example:
    # oc policy add-role-to-user edit system:serviceaccount:storage-project:heketi-service-account
    # oc adm policy add-scc-to-user privileged -z heketi-service-account
  7. Execute the following command to generate a new heketi configuration file:
    # sed -e "s/\${HEKETI_EXECUTOR}/ssh/" -e "s#\${HEKETI_FSTAB}#/etc/fstab#" -e "s/\${SSH_PORT}/22/" -e "s/\${SSH_USER}/root/" -e "s/\${SSH_SUDO}/false/" -e "s/\${BLOCK_HOST_CREATE}/true/" -e "s/\${BLOCK_HOST_SIZE}/500/" "/usr/share/heketi/templates/heketi.json.template" > heketi.json
    • The BLOCK_HOST_SIZE parameter controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes that host the gluster-block volumes (for more information, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/operations_guide/#Block_Storage). This default configuration dynamically creates block-hosting volumes of 500GB in size as more space is required.
    • Alternatively, copy the file /usr/share/heketi/templates/heketi.json.template to heketi.json in the current directory and edit the new file directly, replacing each "${VARIABLE}" string with the required parameter.

      Note

      JSON formatting is strictly required (e.g. no trailing spaces, booleans in all lowercase).
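The sed pipeline fills in the `${VARIABLE}` placeholders. A trimmed sketch of the same pattern on a cut-down stand-in template (the real template at /usr/share/heketi/templates/heketi.json.template has many more fields), followed by the strict-JSON check the note calls for (assumes python3 is available):

```shell
# Cut-down stand-in for the real heketi.json.template.
cat > heketi.json.template <<'EOF'
{
  "glusterfs": {
    "executor": "${HEKETI_EXECUTOR}",
    "sshexec": {
      "port": "${SSH_PORT}",
      "user": "${SSH_USER}",
      "sudo": ${SSH_SUDO}
    }
  }
}
EOF
# Same substitution style as the documented command.
sed -e 's/${HEKETI_EXECUTOR}/ssh/' -e 's/${SSH_PORT}/22/' \
    -e 's/${SSH_USER}/root/' -e 's/${SSH_SUDO}/false/' \
    heketi.json.template > heketi.json
# JSON formatting is strictly required, so validate the result.
python3 -m json.tool heketi.json > /dev/null && echo "heketi.json is valid JSON"
```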
  8. Execute the following command to create a secret to hold the configuration file:
    # oc create secret generic heketi-config-secret --from-file=private_key=${SSH_KEYFILE} --from-file=./heketi.json

    Note

    If the heketi-config-secret file already exists, delete it and then run this command.
  9. Execute the following command to delete the deployment configuration, service, and route for heketi:
    # oc delete deploymentconfig,service,route heketi
  10. Execute the following command to edit the heketi template. Edit the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, and HEKETI_EXECUTOR parameters.
    # oc edit template heketi
    parameters:
    - description: Set secret for those creating volumes as type _user_
      displayName: Heketi User Secret
      name: HEKETI_USER_KEY
      value: <heketiuserkey>
    - description: Set secret for administration of the Heketi service as user _admin_
      displayName: Heketi Administrator Secret
      name: HEKETI_ADMIN_KEY
      value: <adminkey>
    - description: Set the executor type, kubernetes or ssh
      displayName: heketi executor type
      name: HEKETI_EXECUTOR
      value: ssh
    - description: Set the fstab path, file that is populated with bricks that heketi creates
      displayName: heketi fstab path
      name: HEKETI_FSTAB
      value: /etc/fstab
    - description: Set the hostname for the route URL
      displayName: heketi route name
      name: HEKETI_ROUTE
      value: heketi-storage
    - displayName: heketi container image name
      name: IMAGE_NAME
      required: true
      value: rhgs3/rhgs-volmanager-rhel7:v3.10
    - description: A unique name to identify this heketi service, useful for running multiple heketi instances
      displayName: GlusterFS cluster name
      name: CLUSTER_NAME
      value: storage
  11. Execute the following command to deploy the Heketi service, route, and deployment configuration, which will be used to create persistent volumes for OpenShift:
    # oc process heketi | oc create -f -
    service "heketi" created
    route "heketi" created
    deploymentconfig "heketi" created
  12. Execute the following command to verify that the containers are running:
    # oc get pods
    For example:
    # oc get pods
    NAME                             READY     STATUS    RESTARTS   AGE
    glusterfs-0h68l                  1/1       Running   0          3d
    glusterfs-0vcf3                  1/1       Running   0          3d
    glusterfs-gr9gh                  1/1       Running   0          3d
    heketi-1-zpw4d                   1/1       Running   0          3h
    storage-project-router-2-db2wl   1/1       Running   0          4d
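The check can be scripted by counting pods whose STATUS column is not Running. A sketch with sample `oc get pods` output embedded so it is runnable offline (in practice, pipe `oc get pods` directly into the awk filter):

```shell
# Sample output stands in for the live `oc get pods` command.
cat > pods.txt <<'EOF'
NAME                             READY     STATUS    RESTARTS   AGE
glusterfs-0h68l                  1/1       Running   0          3d
glusterfs-0vcf3                  1/1       Running   0          3d
heketi-1-zpw4d                   1/1       Running   0          3h
EOF
# Skip the header line and count rows whose third column is not Running.
NOT_RUNNING=$(awk 'NR > 1 && $3 != "Running"' pods.txt | wc -l | tr -d ' ')
echo "pods not Running: $NOT_RUNNING"
```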

7.2.3.2. Upgrading Gluster Block

Execute the following steps to upgrade gluster block.
  1. Execute the following command to upgrade the gluster block:
    # yum update gluster-block
  2. Enable and start the gluster block service:
    # systemctl enable gluster-blockd
    # systemctl start gluster-blockd
  3. Execute the following command to update the heketi client and cns-deploy packages:
    # yum update cns-deploy -y
    # yum update heketi-client -y
  4. To use gluster block, add the following two parameters to the glusterfs section in the heketi configuration file at /etc/heketi/heketi.json:
    auto_create_block_hosting_volume
    block_hosting_volume_size
    Where:
    auto_create_block_hosting_volume: Creates block hosting volumes automatically if none is found or if the existing volume is exhausted. To enable this, set the value to true.
    block_hosting_volume_size: New block hosting volumes are created with the specified size, in GB. This is considered only if auto_create_block_hosting_volume is set to true. The recommended size is 500 GB, that is, a value of 500.
    For example:
    .....
    .....
    "glusterfs" : {
                    "executor" : "ssh",
                    "db" : "/var/lib/heketi/heketi.db",
                    "sshexec" : {
                    "rebalance_on_expansion": true,
                    "keyfile" : "/etc/heketi/private_key"
                    },
                    "auto_create_block_hosting_volume": true,
                    "block_hosting_volume_size": 500
            },
    .....
    .....
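Because heketi fails to start on malformed JSON, it is worth validating the edited file. A sketch on a trimmed sample written locally for illustration (your real file is /etc/heketi/heketi.json; assumes python3 is available):

```shell
# Trimmed sample of the glusterfs section shown above.
cat > heketi.sample.json <<'EOF'
{
  "glusterfs": {
    "executor": "ssh",
    "db": "/var/lib/heketi/heketi.db",
    "sshexec": {
      "rebalance_on_expansion": true,
      "keyfile": "/etc/heketi/private_key"
    },
    "auto_create_block_hosting_volume": true,
    "block_hosting_volume_size": 500
  }
}
EOF
# The size is the bare number 500 (in GB); a value like 500G is not valid JSON.
python3 -m json.tool heketi.sample.json > /dev/null && echo "valid JSON"
```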
  5. Restart the Heketi service:
    # systemctl restart heketi

    Note

    This step is not applicable if heketi is running as a pod in the OpenShift cluster.
  6. If a gluster-block provisioner pod already exists, delete it by executing the following command:
    # oc delete dc <gluster-block-dc>
    For example:
    # oc delete dc glusterblock-provisioner-dc
  7. Execute the following commands to deploy the gluster-block provisioner:
    # sed -e 's/\\\${NAMESPACE}/<NAMESPACE>/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
    # oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:<NAMESPACE>:glusterblock-provisioner
    For example:
    # sed -e 's/\\\${NAMESPACE}/storage-project/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
    # oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:storage-project:glusterblock-provisioner
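The sed step substitutes the namespace placeholder in the provisioner template before piping it to `oc create`. A trimmed sketch of what it does (the fragment below is illustrative, not the full template, and uses a plain `${NAMESPACE}` placeholder):

```shell
# Minimal stand-in fragment for glusterblock-provisioner.yaml.
cat > glusterblock-provisioner.sample.yaml <<'EOF'
subjects:
- kind: ServiceAccount
  name: glusterblock-provisioner
  namespace: ${NAMESPACE}
EOF
# Resolve the placeholder to the target project, as the documented command does.
sed -e 's/${NAMESPACE}/storage-project/' glusterblock-provisioner.sample.yaml > resolved.yaml
cat resolved.yaml
```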
  8. Delete the following resources from the old pod:
    # oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
    # oc delete serviceaccounts glusterblock-registry-provisioner
  9. Execute the following command to create the glusterblock provisioner:
    # oc process <gluster_block_provisioner_template> | oc create -f -

7.2.4. Upgrading if the existing version was deployed using Ansible

7.2.4.1. Upgrading Heketi in OpenShift node

The following commands must be executed on the client machine.
  1. Execute the following command to update the heketi client:
    # yum update heketi-client -y
  2. Back up the Heketi database file:
    # oc rsh <heketi_pod_name>
    # cp -a /var/lib/heketi/heketi.db /var/lib/heketi/heketi.db.`date +%s`.`heketi --version | awk '{print $2}'`
    # exit
  3. Execute the following command to get the current HEKETI_ADMIN_KEY.
    The OCS admin can choose any phrase for the user key, as long as it is not used by their infrastructure. It is not used by any of the OCS default installed resources.
    # oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}' | base64 -d; echo
  4. Execute the following step to edit the template:
    # oc get templates
    NAME                      DESCRIPTION                           PARAMETERS    OBJECTS
    glusterblock-provisioner  glusterblock provisioner template     3 (2 blank)   4
    glusterfs                 GlusterFS DaemonSet template          5 (1 blank)   1
    heketi                    Heketi service deployment template    7 (3 blank)   3
    If the existing template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then edit the template to change the HEKETI_EXECUTOR, HEKETI_FSTAB, HEKETI_ROUTE, IMAGE_NAME, IMAGE_VERSION, and CLUSTER_NAME parameters, as shown in the example below.
    # oc edit template heketi
    - description: Set the executor type, kubernetes or ssh
      displayName: heketi executor type
      name: HEKETI_EXECUTOR
      value: ssh
    - description: Set the fstab path, file that is populated with bricks that heketi creates
      displayName: heketi fstab path
      name: HEKETI_FSTAB
      value: /etc/fstab
    - description: Set the hostname for the route URL
      displayName: heketi route name
      name: HEKETI_ROUTE
      value: heketi-storage
    - displayName: heketi container image name
      name: IMAGE_NAME
      required: true
      value: rhgs3/rhgs-volmanager-rhel7
    - displayName: heketi container image version
      name: IMAGE_VERSION
      required: true
      value: v3.10
    - description: A unique name to identify this heketi service, useful for running multiple heketi instances
      displayName: GlusterFS cluster name
      name: CLUSTER_NAME
      value: storage
    If the template has only IMAGE_NAME, then edit the template to change the HEKETI_EXECUTOR, HEKETI_FSTAB, HEKETI_ROUTE, IMAGE_NAME, and CLUSTER_NAME as shown in the example below.
    - description: Set the executor type, kubernetes or ssh
      displayName: heketi executor type
      name: HEKETI_EXECUTOR
      value: ssh
    - description: Set the fstab path, file that is populated with bricks that heketi creates
      displayName: heketi fstab path
      name: HEKETI_FSTAB
      value: /etc/fstab
    - description: Set the hostname for the route URL
      displayName: heketi route name
      name: HEKETI_ROUTE
      value: heketi-storage
    - displayName: heketi container image name
      name: IMAGE_NAME
      required: true
      value: rhgs3/rhgs-volmanager-rhel7:v3.10
    - description: A unique name to identify this heketi service, useful for running multiple heketi instances
      displayName: GlusterFS cluster name
      name: CLUSTER_NAME
      value: storage
  5. Execute the following command to delete the deployment configuration, service, and route for heketi:
    # oc delete deploymentconfig,service,route heketi-storage
  6. Execute the following command to deploy the Heketi service, route, and deploymentconfig, which will be used to create persistent volumes for OpenShift:
    # oc process heketi | oc create -f -
    service "heketi" created
    route "heketi" created
    deploymentconfig "heketi" created
  7. Execute the following command to verify that the containers are running:
    # oc get pods
    For example:
    # oc get pods
      NAME                             READY     STATUS    RESTARTS   AGE
      glusterfs-0h68l                  1/1       Running   0          3d
      glusterfs-0vcf3                  1/1       Running   0          3d
      glusterfs-gr9gh                  1/1       Running   0          3d
      heketi-1-zpw4d                   1/1       Running   0          3h
      storage-project-router-2-db2wl   1/1       Running   0          4d
    

7.2.4.2. Upgrading Gluster Block if Deployed by Using Ansible

Execute the following steps to upgrade gluster block.
  1. Execute the following command to upgrade the gluster block:
    # yum update gluster-block
  2. Enable and start the gluster block service:
    # systemctl enable gluster-blockd
    # systemctl start gluster-blockd
  3. Execute the following command to update the heketi client:
    # yum update heketi-client -y
  4. Restart the Heketi service:
    # systemctl restart heketi

    Note

    This step is not applicable if heketi is running as a pod in the OpenShift cluster.
  5. If a gluster-block provisioner pod already exists, delete it by executing the following command:
    # oc delete dc <gluster-block-dc>
    For example:
    # oc delete dc glusterblock-provisioner-dc
  6. Edit the glusterblock-provisioner template to change the IMAGE_NAME, IMAGE_VERSION, and NAMESPACE parameters.
    # oc get templates
    NAME                      DESCRIPTION                           PARAMETERS    OBJECTS
    glusterblock-provisioner  glusterblock provisioner template     3 (2 blank)   4
    glusterfs                 GlusterFS DaemonSet template          5 (1 blank)   1
    heketi                    Heketi service deployment template    7 (3 blank)   3
    If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterblock-provisioner template as follows. For example:
    # oc edit template glusterblock-provisioner
    - displayName: glusterblock provisioner container image name
      name: IMAGE_NAME
      required: true
      value: rhgs3/rhgs-gluster-block-prov-rhel7
    - displayName: glusterblock provisioner container image version
      name: IMAGE_VERSION
      required: true
      value: v3.10
    - description: The namespace in which these resources are being created
      displayName: glusterblock provisioner namespace
      name: NAMESPACE
      required: true
      value: glusterfs
    - description: A unique name to identify which heketi service manages this cluster,
        useful for running multiple heketi instances
      displayName: GlusterFS cluster name
      name: CLUSTER_NAME
      value: storage
    If the template has only IMAGE_NAME as a parameter, then update the glusterblock-provisioner template as follows. For example:
    # oc edit template glusterblock-provisioner
    - displayName: glusterblock provisioner container image name
      name: IMAGE_NAME
      required: true
      value: rhgs3/rhgs-gluster-block-prov-rhel7:v3.10
    - description: The namespace in which these resources are being created
      displayName: glusterblock provisioner namespace
      name: NAMESPACE
      required: true
      value: glusterfs
    - description: A unique name to identify which heketi service manages this cluster,
        useful for running multiple heketi instances
      displayName: GlusterFS cluster name
      name: CLUSTER_NAME
      value: storage
  7. Delete the following resources from the old pod:
    # oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
    # oc delete serviceaccounts glusterblock-registry-provisioner
  8. Execute the following command to create the glusterblock provisioner:
    # oc process <gluster_block_provisioner_template> | oc create -f -

7.2.5. Enabling S3 Compatible Object Store

Support for an S3 compatible object store is in Technology Preview. To enable it, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/operations_guide/#S3_Object_Store.

Note

If you have gluster nodes and heketi pods in the glusterfs registry namespace, follow the steps in Section 7.3, “Upgrading Gluster Nodes and heketi pods in glusterfs Registry Namespace”.