
Performance Tuning Guide


Red Hat JBoss Data Grid 7.0

For use with Red Hat JBoss Data Grid 7.0

Misha Husnain Ali

Red Hat Engineering Content Services

Christian Huffman

Red Hat Engineering Content Services

Abstract

This guide presents information about performance tuning and configuration of Red Hat JBoss Data Grid 7.0.

Chapter 1. Introduction

Red Hat JBoss Data Grid is a distributed in-memory data grid, which provides the following capabilities:
  • Schemaless key-value store – JBoss Data Grid is a NoSQL database that provides the flexibility to store different objects without a fixed data model.
  • Grid-based data storage – JBoss Data Grid is designed to easily replicate data across multiple nodes.
  • Elastic scaling – Adding and removing nodes is simple and non-disruptive.
  • Multiple access protocols – It is easy to access the data grid using REST, Memcached, Hot Rod, or a simple map-like API.

1.1. Supported Configurations

The set of supported features, configurations, and integrations for Red Hat JBoss Data Grid (current and past versions) is available on the Supported Configurations page at https://access.redhat.com/articles/115883.

1.2. Components and Versions

Red Hat JBoss Data Grid includes many components for Library and Remote Client-Server modes. A comprehensive and up-to-date list of the components included in each of these usage modes, and their versions, is available on the Red Hat JBoss Data Grid Component Details page at https://access.redhat.com/articles/488833.

1.3. About Performance Tuning in Red Hat JBoss Data Grid

The Red Hat JBoss Data Grid Performance Tuning Guide provides information about optimizing and configuring specific elements within the product to improve the performance of a JBoss Data Grid implementation.
Because each business case is different, it is not possible to provide a "one size fits all" approach to tuning. Instead, this guide presents various elements that have proven effective in increasing performance, along with potential starting values to test for a user's specific case. It is imperative that performance be measured again after each individual change, to isolate any improvements or negative effects; this allows a methodical approach to establishing a baseline of parameters.
When testing it is strongly recommended to use a workload that closely mirrors the expected production load. Using any other workload may result in performance differences between the testing and production environments.

Chapter 2. Java Virtual Machine Settings

Tuning a Java Virtual Machine (JVM) is a complex task, due to the number of configuration options and the changes introduced with each new release.
The recommended approach to performance tuning a JVM is to start with as simple a configuration as possible and retain only the tuning options that prove beneficial, rather than applying every available tweak. A collection of tested configurations for various heap sizes is provided after the parameters are discussed.
Heap Size

The JVM's heap size determines how much memory is allowed for the application to consume, and is controlled by the following parameters:

  • -Xms - Defines the minimum heap size allowed.
  • -Xmx - Defines the maximum heap size allowed.
  • -XX:NewRatio - Defines the ratio between the young and old generations. Should not be used if -Xmn is specified.
  • -Xmn - Defines the initial and maximum size of the young generation.

In the majority of instances -Xms and -Xmx should be set to identical values to prevent dynamic resizing of the heap, which can result in longer garbage collection pauses.
Garbage Collection

The choice of garbage collection algorithm is largely determined by whether throughput is valued over minimizing the amount of time the JVM is fully paused. As JBoss Data Grid applications are often clustered, it is recommended to choose a low-pause collector to prevent network timeouts. The following parameters assume the CMS (Concurrent Mark-Sweep) collector is chosen:

  • -XX:+UseConcMarkSweepGC - Enables usage of the CMS collector.
  • -XX:+CMSClassUnloadingEnabled - Allows class unloading when the CMS collector is enabled.
  • -XX:+UseParNewGC - Utilizes a parallel collector for the young generation. This parameter minimizes pausing by using multiple collection threads in parallel.
  • -XX:+DisableExplicitGC - Prevents explicit garbage collections.

Large Pages

Large, or Huge, Pages are contiguous pages of memory that are much larger than the page size typically defined at the OS level. By utilizing large pages the JVM has access to memory that is more efficiently referenced, and that may not be swapped out, resulting in more consistent behavior from the JVM. Large pages are discussed in further detail in Section 3.1, “About Page Memory”.

  • -XX:+UseLargePages - Instructs the JVM to allocate memory in Large Pages. These pages must be configured at the OS level for this parameter to function successfully.

Server Configuration

This parameter relates to JIT (Just-In-Time) compilation, which requires extended loading times during startup, but provides extensive compilation and optimization benefits after the startup process completes.

  • -server - Enables server mode for the JVM.

2.1. Memory Requirements

Minimum Requirements

The default minimum amount of memory required to run JBoss Data Grid varies based on the configuration in use:

  • standalone.conf - The server should have a minimum of 2 GB of RAM for a single JBoss Data Grid instance, as the default heap may grow up to 1.3 GB, and Metaspace may occupy up to 256 MB of memory.
  • domain.conf - The server should have a minimum of 2.5 GB of RAM for a single JBoss Data Grid managed domain consisting of two JBoss Data Grid server instances, as the heap may grow up to 512 MB for the domain controller, the heap for each server instance may grow up to 256 MB, and the Metaspace may occupy up to 256 MB of memory for the domain controller and each server instance.

Recommended Memory Requirements

There is no official memory recommendation for JBoss Data Grid, as the memory requirements will vary depending on the application and workload in use. As the heap is increased more data may be stored in the grid.

It is strongly recommended to test each application, measuring throughput and collection times to determine if they are acceptable for the application in question.
Physical Memory Requirements

Each JVM process has a memory footprint that adheres to the following formula:

JvmProcessMemory = JvmHeap + Metaspace + (ThreadStackSize * Number of Threads) + Jvm-native-c++-heap

Adjusting these values is discussed in Chapter 2, Java Virtual Machine Settings.
The Jvm-native-c++-heap varies based on the number of native threads and whether any native libraries are used; however, for a default installation it is safe to assume this will use no more than 256 MB of memory.
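As an illustration, the formula above can be evaluated for a hypothetical server. The class and method names below, and all input values (an 8 GB heap, 256 MB of Metaspace, 160 threads with 1 MB stacks, and the 256 MB native-heap allowance mentioned above), are example assumptions, not recommendations:

```java
// Sketch of the JvmProcessMemory formula from above; all inputs are
// illustrative example values.
public class JvmFootprint {

    // Returns the estimated JVM process footprint in megabytes.
    static long footprintMb(long heapMb, long metaspaceMb,
                            int threadCount, long threadStackMb,
                            long nativeHeapMb) {
        return heapMb + metaspaceMb
                + (long) threadCount * threadStackMb
                + nativeHeapMb;
    }

    public static void main(String[] args) {
        // 8 GB heap + 256 MB Metaspace + 160 threads * 1 MB stacks
        // + 256 MB native C++ heap = 8864 MB total
        System.out.println(footprintMb(8 * 1024, 256, 160, 1, 256));
    }
}
```

Running a calculation like this against the intended JVM settings gives a first estimate of the physical memory each JBoss Data Grid process will require.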

2.2. JVM Example Configurations

The following configurations have been tested internally, and are provided as a baseline for customization. These configurations show various heap sizes, which allows users to find one appropriate for their environment to begin testing:

Example 2.1. 8GB JVM

-server
-Xms8192m
-Xmx8192m
-XX:+UseLargePages
-XX:NewRatio=3
-XX:+UseConcMarkSweepGC
-XX:+UseParNewGC
-XX:+DisableExplicitGC

Example 2.2. 32GB JVM

-server
-Xmx32G
-Xms32G
-Xmn8G
-XX:+UseLargePages
-XX:+UseConcMarkSweepGC
-XX:+UseParNewGC
-XX:+DisableExplicitGC

Example 2.3. 64GB JVM

-server
-Xmx64G
-Xms64G
-Xmn16G
-XX:+UseConcMarkSweepGC
-XX:+UseParNewGC
-XX:+UseLargePages
-XX:+DisableExplicitGC

Chapter 3. Configure Page Memory

3.1. About Page Memory

A memory page is a fixed-size, contiguous block of memory, used when transferring data from one storage medium to another and when allocating memory. In some architectures, larger page sizes are available for improved memory allocation; these pages are known as large (or huge) pages.
The default memory page size in most operating systems is 4 kilobytes (KB). A 32-bit operating system can address a maximum of 4 GB of memory, which equates to 1,048,576 memory pages. A 64-bit operating system can, in theory, address 18 exabytes of memory, resulting in a very large number of memory pages. The overhead of managing such a large number of memory pages is significant, regardless of the operating system.
Large memory pages are pages of memory that are significantly larger than 4 KB (usually 2 MB). In some instances the size is configurable, from 2 MB to 2 GB, depending on the CPU architecture.
Large memory pages are locked in memory, and cannot be swapped to disk like normal memory pages. The advantage of this is that if the heap uses large page memory it cannot be paged or swapped to disk, so it is always readily available. On Linux, the disadvantage is that applications must attach to the memory using the correct flag for the shmget() system call, and the proper security permissions are required for the memlock resource limit. Any application that does not have the ability to use large page memory behaves as if the large page memory does not exist, which can be a problem because the reserved pages reduce the memory available to such applications.
For additional information on page size, refer to Configuring HugeTLB Huge Pages in the Red Hat Enterprise Linux documentation.

3.2. Configure Page Memory

Page memory configuration to optimize Red Hat JBoss Data Grid's performance must be implemented both at the operating system level and at the JVM level. The provided instructions are for the Red Hat Enterprise Linux operating system. Apply both the operating system level and JVM level instructions for optimal performance.

Procedure 3.1. Configure Page Memory for Red Hat Enterprise Linux

  1. Set the Shared Memory Segment Size

    As root, set the maximum size of a shared memory segment in bytes; below we define this to be 32 GB:
    # echo "kernel.shmmax = 34359738368" >> /etc/sysctl.conf
  2. Set the Huge Pages

    The number of huge pages is set to the total amount of memory the JVM will consume (heap, Metaspace, thread stacks, native code) divided by the Hugepagesize. On Red Hat Enterprise Linux systems the default Hugepagesize is 2048 KB (2 MB).
    1. The number of huge pages required can be determined by the following formula:
      Heap + Meta space + Native JVM Memory + (Number of Threads * Thread Stack Size)
    2. Assuming a JVM with a 32 GB Heap, 2 GB of Meta space, a 512 MB native footprint, and 500 threads with a default stack size of 1 MB each, we have the following equation:
      32*(1024*1024*1024) + 2*(1024*1024*1024) + 512*(1024*1024) + (500 * 1024*1024)
    3. The resulting value can now be converted to hugepages. Since a single hugepage is 2 MB (2 * 1024 * 1024 bytes), we perform the following division, which yields 17914:
      37568380928 / (2*1024*1024) = 17914
    As root, set the number of huge pages determined from the previous steps to be allocated to the operating system:
    # echo "vm.nr_hugepages = 17914" >> /etc/sysctl.conf
  3. Assign Shared Memory Segment Permissions

    As root, set the ID of the user group that is allowed to create shared memory segments using the hugetlb_shm_group file. This value should match the group id of the user running the JVM:
    # echo "vm.hugetlb_shm_group = 500" >> /etc/sysctl.conf
  4. Update the Resource Limits

    To allow a user to lock the required amount of memory, update the resource limits in the /etc/security/limits.conf file by adding the following:
    jboss      soft   memlock      unlimited
    jboss      hard   memlock      unlimited
    This change allows the user jboss to lock the system's available memory.
  5. Configure Authentication using PAM

    Linux's PAM handles authentication for applications and services. Ensure that the configured system resource limits apply when using su and sudo as follows:
    1. Configure PAM for su

      Add the following line to the /etc/pam.d/su file:
      session    required   pam_limits.so
    2. Configure PAM for sudo

      Add the following line to the /etc/pam.d/sudo file:
      session    required   pam_limits.so
  6. Reboot the system for the changes to take effect. Since huge pages require contiguous blocks of memory, they must be allocated at system boot; attempting to claim them dynamically while the system is running may fail, or may cause the system to hang if the memory cannot be reclaimed.
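The arithmetic in step 2 of the procedure above can be double-checked with a short program. It reproduces the worked example (32 GB heap, 2 GB Meta space, 512 MB native footprint, 500 threads at 1 MB each) against a 2 MB hugepage size; the class and method names are illustrative only:

```java
// Sketch reproducing the huge-page calculation from Procedure 3.1, step 2.
public class HugePageCalc {

    static final long MB = 1024L * 1024L;
    static final long GB = 1024L * MB;
    static final long HUGEPAGE_SIZE = 2 * MB; // default Hugepagesize on RHEL

    // Total JVM memory in bytes, per the formula in step 2.
    static long totalJvmBytes(long heapBytes, long metaSpaceBytes,
                              long nativeBytes, int threads, long stackBytes) {
        return heapBytes + metaSpaceBytes + nativeBytes + threads * stackBytes;
    }

    // Number of huge pages needed to back that amount of memory.
    static long hugePages(long totalBytes) {
        return totalBytes / HUGEPAGE_SIZE;
    }

    public static void main(String[] args) {
        long total = totalJvmBytes(32 * GB, 2 * GB, 512 * MB, 500, 1 * MB);
        System.out.println(total);            // 37568380928
        System.out.println(hugePages(total)); // 17914, as used in vm.nr_hugepages
    }
}
```

The result, 17914, matches the value written to vm.nr_hugepages in the procedure; substitute your own JVM parameters to derive the value for your system.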

Procedure 3.2. Configure Page Memory for the JVM

  1. Set the Heap Size

    Use the -Xms and -Xmx parameters to set the minimum and maximum heap sizes for your JVM, as discussed in Chapter 2, Java Virtual Machine Settings.
  2. Enable Large Pages

    Enable large pages for the JVM by adding the following parameter, as discussed in Chapter 2, Java Virtual Machine Settings:
    -XX:+UseLargePages

Chapter 4. Networking Configuration

4.1. TCP Settings

The Transmission Control Protocol (TCP) is a core part of the Internet Protocol (IP) communication protocol suite. Computers use the TCP/IP protocol to communicate with each other when using the Internet.

4.1.1. Adjusting TCP Send/Receive Window Settings

The operating system is a deciding factor in determining the maximum size of the TCP Send and Receive window.

Procedure 4.1. Set the TCP Send and Receive Windows

For Red Hat Enterprise Linux, use the recommended settings to configure the send and receive windows as follows:
  1. Adjust the Send and Receive Window Sizes

    Adjust the size of the send and receive windows by adding the following lines to the /etc/sysctl.conf file as root:
    1. Add the following line to set the send window size to the recommended value (640 KB):
      net.core.wmem_max=655360
    2. Add the following line to set the receive window size to the recommended value (25 MB):
      net.core.rmem_max=26214400
  2. Apply Changes Immediately

    Optionally, to load the new values into a running kernel (without a reboot), enter the following command as root:
    # sysctl -p
    If the user reboots after the first step, the second step is unnecessary.
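The recommended values above are simply the window sizes expressed in bytes. A quick sketch of the conversion (class and method names are illustrative only):

```java
// Converts the recommended TCP window sizes into the byte values
// written to /etc/sysctl.conf above.
public class TcpWindowSizes {

    static long kbToBytes(long kb) { return kb * 1024L; }
    static long mbToBytes(long mb) { return mb * 1024L * 1024L; }

    public static void main(String[] args) {
        System.out.println(kbToBytes(640)); // net.core.wmem_max = 655360
        System.out.println(mbToBytes(25));  // net.core.rmem_max = 26214400
    }
}
```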

Chapter 5. Return Values

5.1. About Return Values

Values returned by cache operations are referred to as return values. In Red Hat JBoss Data Grid, these return values remain reliable irrespective of which cache mode is employed and whether synchronous or asynchronous communication is used.

5.2. Disabling Return Values

As a default in Red Hat JBoss Data Grid, the put() and remove() API operations return the original or previous values in the cache. However, if the original value was not required, this operation is wasteful.
To conserve the resources used, disable the return values. Note that this solution is only applicable for cache instances that only perform write operations (for example put() and remove()). Implement this solution as follows:

Procedure 5.1. Disable Return Values

  1. Set the IGNORE_RETURN_VALUES flag. This flag signals that the operation's return value is ignored. For example:
    cache.getAdvancedCache().withFlags(Flag.IGNORE_RETURN_VALUES)
  2. Set the SKIP_CACHE_LOAD flag. This flag prevents entries from being loaded from any configured cache stores. For example:
    cache.getAdvancedCache().withFlags(Flag.SKIP_CACHE_LOAD)

Chapter 6. Marshalling

Marshalling is the process of converting Java objects into a format that is transferable over the wire. Unmarshalling is the reversal of this process where data read from a wire format is converted into Java objects.
Red Hat JBoss Data Grid uses marshalling and unmarshalling to:
  • transform data for relay to other JBoss Data Grid nodes within the cluster.
  • transform data to be stored in underlying cache stores.

6.1. About the JBoss Marshalling Framework

Red Hat JBoss Data Grid uses the JBoss Marshalling Framework to marshall and unmarshall Java POJOs. The JBoss Marshalling Framework offers a significant performance benefit, and is therefore used instead of Java Serialization.
The JBoss Marshalling Framework provides high-performance implementations of java.io.ObjectOutput and java.io.ObjectInput, compared to the standard java.io.ObjectOutputStream and java.io.ObjectInputStream.

6.2. Customizing Marshalling

Instead of using the default Marshaller, which may be slow and produce unnecessarily large payloads, objects may implement java.io.Externalizable so that a custom method of marshalling/unmarshalling classes is performed. With this approach the target class may be created in a variety of ways (direct instantiation, factory methods, reflection, etc.) and the developer has complete control over the provided stream.
Implementing a Custom Externalizer

To configure a class for custom marshalling an implementation of org.infinispan.marshall.AdvancedExternalizer must be provided. Typically this is performed in a static inner class, as seen in the below externalizer for a Book class:

import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
import java.util.Set;

import org.infinispan.marshall.AdvancedExternalizer;
import org.infinispan.util.Util;

public class Book {

   final String name;
   final String author;

   public Book(String name, String author) {
      this.name = name;
      this.author = author;
   }

   public static class BookExternalizer implements AdvancedExternalizer<Book> {
      @Override
      public void writeObject(ObjectOutput output, Book book)
            throws IOException {
         output.writeObject(book.name);
         output.writeObject(book.author);
      }

      @Override
      public Book readObject(ObjectInput input)
            throws IOException, ClassNotFoundException {
         return new Book((String) input.readObject(), (String) input.readObject());
      }

      @Override
      public Set<Class<? extends Book>> getTypeClasses() {
         return Util.<Class<? extends Book>>asSet(Book.class);
      }

      @Override
      public Integer getId() {
         return 2345;
      }
   }
}

Once the writeObject() and readObject() methods have been implemented, the Externalizer must be linked with the class it externalizes; this is accomplished with the getTypeClasses() method seen in the above example.
In addition, a positive identifier must be defined as seen in the getId() method above. This value is used to identify the Externalizer at runtime. A list of values used by JBoss Data Grid, which should be avoided in custom Externalizer implementations, may be found at Section 6.3, “JBoss Data Grid Externalizer IDs”.
Registering Custom Marshallers

Custom Marshallers may be registered with JBoss Data Grid programmatically or declaratively, as seen in the following examples:

Example 6.1. Declaratively Register a Custom Marshaller

<cache-container>
  <serialization>
    <advanced-externalizer class="Book$BookExternalizer"/>
  </serialization>
</cache-container>

Example 6.2. Programmatically Register a Custom Marshaller

GlobalConfigurationBuilder builder = ...
builder.serialization()
   .addAdvancedExternalizer(new Book.BookExternalizer());

6.3. JBoss Data Grid Externalizer IDs

The following values are used as Externalizer IDs inside Infinispan-based modules and frameworks, and should be avoided when implementing custom marshallers.
Table 6.1. JBoss Data Grid Externalizer IDs

  Module Name                                ID Range
  Infinispan Tree Module                     1000-1099
  Infinispan Server Modules                  1100-1199
  Hibernate Infinispan Second Level Cache    1200-1299
  Infinispan Lucene Directory                1300-1399
  Hibernate OGM                              1400-1499
  Hibernate Search                           1500-1599
  Infinispan Query Module                    1600-1699
  Infinispan Remote Query Module             1700-1799
  Infinispan Scripting Module                1800-1849
  Infinispan Server Event Logger Module      1850-1899
  Infinispan Remote Store                    1900-1999

Chapter 7. JMX

7.1. About Java Management Extensions (JMX)

Java Management Extensions (JMX) is a Java-based technology that provides tools to manage and monitor applications, devices, system objects, and service-oriented networks. Each of these objects is managed and monitored using MBeans.
JMX is the de facto standard for middleware management and administration. As a result, JMX is used in Red Hat JBoss Data Grid to expose management and statistical information.

7.2. Using JMX with Red Hat JBoss Data Grid

Management in Red Hat JBoss Data Grid instances aims to expose as much relevant statistical information as possible. This information allows administrators to view the state of each instance. While a single installation can comprise tens or hundreds of such instances, it is essential to expose and present the statistical information for each of them in a clear and concise manner.
In JBoss Data Grid, JMX is used in conjunction with JBoss Operations Network (JON) to expose this information and present it in an orderly and relevant manner to the administrator.

7.3. Enabling JMX with Red Hat JBoss Data Grid

By default JMX is enabled locally on each JBoss Data Grid server, and no further configuration is necessary to connect via JConsole, VisualVM, or other JMX clients that are launched from the same system.
To enable remote connections it is necessary to define a port for the JMX remote agent to listen on. When using OpenJDK this behavior is defined with the com.sun.management.jmxremote.port parameter. In addition, it is recommended to secure the remote connection when this is used in a production environment.

Example 7.1. Enable JMX for Remote Connections using the OpenJDK

This example assumes that an SSL keystore, named keystore, has already been created, and configures a standalone instance to accept incoming connections on port 3333 using the created keystore.
## Default configuration, 1.3GB heap
JAVA_OPTS="-server -Xms1303m -Xmx1303m -XX:MetaspaceSize=96m -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true"

## Add the JMX configuration
JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote.port=3333"
JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote.ssl=true"
JAVA_OPTS="$JAVA_OPTS -Djavax.net.ssl.keyStore=keystore"
JAVA_OPTS="$JAVA_OPTS -Djavax.net.ssl.keyStorePassword=password"
As JMX behavior is configured through JVM arguments, refer to the JDK vendor's documentation for a full list of parameters and configuration examples.

Chapter 8. Hot Rod Server

8.1. About Hot Rod

Hot Rod is a binary TCP client-server protocol used in Red Hat JBoss Data Grid. It was created to overcome deficiencies in other client/server protocols, such as Memcached.
Hot Rod supports failover when a server cluster undergoes a topology change; it achieves this by providing regular updates to clients about the cluster topology.
Hot Rod enables clients to do smart routing of requests in partitioned or distributed JBoss Data Grid server clusters. To do this, Hot Rod allows clients to determine the partition that houses a key and then communicate directly with the server that owns the key. This functionality relies on Hot Rod keeping clients updated about the cluster topology, and on the clients using the same consistent hash algorithm as the servers.

8.2. About Hot Rod Servers in Red Hat JBoss Data Grid

Red Hat JBoss Data Grid contains a server module that implements the Hot Rod protocol. The Hot Rod protocol facilitates faster client and server interactions in comparison to other text based protocols and allows clients to make decisions about load balancing, failover and data location operations.

8.3. Worker Threads in the Hot Rod Server

8.3.1. About Worker Threads

Worker threads are threads activated by a client's request; unlike system threads, they do not interact with the user. Because the server is asynchronous, client requests continue to be received after this limit is reached; the limit instead represents the number of active threads performing simultaneous operations, typically writes.
In Red Hat JBoss Data Grid, worker threads are used as part of the configurations for the REST, Memcached and Hot Rod interfaces.

8.3.2. Change Number of Worker Threads

In Red Hat JBoss Data Grid, the default number of worker threads for all connectors is 160, and this number may be changed. The number of worker threads may be specified as the worker-threads attribute on each interface, as seen in the following example:
<hotrod-connector socket-binding="hotrod" cache-container="local" worker-threads="200">
    <!-- Additional configuration here -->
</hotrod-connector>

Appendix A. Revision History

Revision History
Revision 7.0-1Wed 20 July 2016Christian Huffman
Adjusted section titles.
Revision 7.0-0Tue 31 May 2016Misha Husnain Ali, Rakesh Ghatvisave, Christian Huffman
BZ-841709: Created draft, adding content.
BZ-841709: Adding performance tuning information for JVMs.
BZ-1009721: Added startup script for JBoss Data Grid Remote Client-Server mode under 1.2.1. Remote Client-Server Mode.
BZ-10116203: Added a components and versions topic in the introductory chapter.
Updated huge pages section to include formula for custom values.
Included descriptions of recommended JVM parameters.

Legal Notice

Copyright © 2016 Red Hat, Inc.
This document is licensed by Red Hat under the Creative Commons Attribution-ShareAlike 3.0 Unported License. If you distribute this document, or a modified version of it, you must provide attribution to Red Hat, Inc. and provide a link to the original. If the document is modified, all Red Hat trademarks must be removed.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.