
13.2. Memory Management Considerations


This section describes settings related to memory management.
The JBoss Data Virtualization engine uses batching to reduce the number of rows held in memory for processing at a given time. The batch sizes may be adjusted to larger values as more clients access the server simultaneously.
buffer-service-max-reserve-kb
Default is -1. This setting determines the total size in kilobytes of batches that can be held by the buffer manager in memory. This number does not account for persistent batches held by soft (such as index pages) or weak references. The default value of -1 will automatically calculate a typical maximum based on the maximum heap available to the VM. The calculated value assumes a 64-bit architecture and will limit buffer usage to 50% of the first gigabyte of memory beyond the first 300 megabytes (which are assumed for use by JBoss EAP) and 75% of the memory beyond that.
For example, with default settings and an 8GB VM size, then buffer-service-max-reserve-kb will at a maximum use: (((1024-300) * 0.5) + (7 * 1024 * 0.75)) = 5738 MB or 5875712 KB.
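
For illustration, the sketch below restates this default heuristic as a small calculation. The helper name and structure are not part of the product configuration; they simply mirror the 300 MB / 50% / 75% rule described above.

def default_max_reserve_kb(heap_mb):
    # Illustrative restatement of the documented -1 heuristic (an assumption,
    # not an official API): 50% of the first gigabyte beyond the initial
    # 300 MB assumed for JBoss EAP, plus 75% of any memory beyond 1 GB.
    first_gb_mb = max(min(heap_mb, 1024) - 300, 0) * 0.5
    remainder_mb = max(heap_mb - 1024, 0) * 0.75
    return int((first_gb_mb + remainder_mb) * 1024)  # result in kilobytes

# An 8GB VM reproduces the figure from the example: 5875712 KB (5738 MB).
print(default_max_reserve_kb(8 * 1024))
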
The buffer manager automatically triggers the use of a canonical value cache, if enabled, when more than 25% of buffer-service-max-reserve-kb is in use. This can dramatically cut memory usage in situations where similar value sets are being read by JBoss Data Virtualization, but it does introduce a lookup cost. If you are processing small or highly similar datasets with JBoss Data Virtualization, and wish to conserve memory, you should consider enabling value caching.

Note

Memory consumption can be significantly more or less than the nominal target depending on actual column values and whether value caching is enabled. Large non-built-in type objects can exceed their default size estimate. If out-of-memory errors occur, lower the buffer-service-max-reserve-kb value. Also note that source LOB values are held by memory references that are not cleared when a batch is persisted. With heavy LOB usage you should ensure that buffers or other memory associated with LOB references are appropriately sized.
buffer-service-max-processing-kb
Default is -1. This setting determines the total size in kilobytes of batches that can be guaranteed for use by one active plan and may be in addition to the memory held based on buffer-service-max-reserve-kb. The typical minimum memory required by JBoss Data Virtualization when all active plans are in use is #(active-plans) * buffer-service-max-processing-kb. The default value of -1 will automatically calculate a typical maximum based upon the maximum heap available to the VM and the maximum number of active plans. The calculated value assumes a 64-bit architecture and will limit processing batch usage to 10% of memory beyond the first 300 megabytes (which are assumed for use by JBoss EAP).
For example, with default settings including 20 active plans and an 8GB VM size, buffer-service-max-processing-kb will be: (((1024-300) * 0.1) + (7 * 1024 * 0.1))/20 = 789.2 MB/20 = 39.46 MB or 40407 KB per plan. This implies that between 0 and roughly 789 MB may be reserved in total, at roughly 40 MB per plan.
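
The following sketch mirrors that per-plan calculation. Again, the helper is purely illustrative and not part of the server configuration.

def default_max_processing_kb(heap_mb, active_plans):
    # Illustrative restatement of the documented -1 heuristic: 10% of the
    # heap beyond the first 300 MB (assumed for JBoss EAP), divided across
    # the configured number of active plans.
    per_plan_mb = max(heap_mb - 300, 0) * 0.1 / active_plans
    return int(per_plan_mb * 1024)  # result in kilobytes per plan

# 20 active plans on an 8GB VM reproduce the figure above: 40407 KB per plan.
print(default_max_processing_kb(8 * 1024, 20))
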
In systems where large intermediate results are expected you can consider increasing buffer-service-max-processing-kb and decreasing buffer-service-max-reserve-kb so that each request has access to an effectively smaller buffer space.
buffer-service-max-file-size
Default is 2GB. Each intermediate result buffer, temporary LOB, and temporary table is stored in its own set of buffer files, where an individual file is limited to buffer-service-max-file-size megabytes. Consider increasing the storage space available to all such files using buffer-service-max-buffer-space if your installation makes use of internal materialization, makes heavy use of SQL/XML, or processes large row counts.
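
As a rough illustration of how the per-file cap relates to storage needs, the hypothetical helper below divides an assumed result size by buffer-service-max-file-size; the sizes used are examples only.

import math

def buffer_files_needed(result_size_mb, max_file_size_mb=2048):
    # With the 2GB default, a 100GB internal materialization would be
    # spread across roughly 50 files in its buffer-file set.
    return math.ceil(result_size_mb / max_file_size_mb)

print(buffer_files_needed(100 * 1024))  # 50
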
TEIID_AUDIT_LOGGING and TEIID_COMMAND_LOGGING
The logging queue-length for TEIID_AUDIT_LOGGING and TEIID_COMMAND_LOGGING should be increased. As a minimum, set it to the expected concurrency level for that node. In other words, if you expect 1,000 users to use the system concurrently, set each of the queue-lengths to 1,000 and then tune from there.
Limitations

It is also important to keep in mind that Teiid has memory and other hard limits, which break down along several lines: the number of storage objects tracked, disk storage, streaming data size/row limits, and so forth.

The buffer manager has a maximum addressable space of 16 terabytes, but due to fragmentation the usable maximum should be expected to be less. This is the maximum amount of storage available to Teiid for all temporary LOBs, internal tables, intermediate results, etc.
The maximum size of an object (batch or table page) that can be serialized by the buffer manager is 32 GB, but you should not get near that (the default limit is 8 MB). A batch is a set of rows that flows through the Teiid engine. Teiid temporary tables (also used for internal materialization) can only support 2^31-1 rows per table.
However, handling a source that has tera/petabytes of data does not by itself impact Teiid in any way. What matters is which processing operations are being performed and how much of that data needs to be stored on a temporary basis in Teiid. With a simple forward-only query, as long as the result row count is less than 2^31, Teiid will happily return a petabyte of data.
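
A hedged sanity-check sketch of these hard limits is shown below; the constants simply restate the figures quoted in this section, and the helper itself is hypothetical.

# Hypothetical sanity check against the limits quoted above.
MAX_BUFFER_SPACE_BYTES = 16 * 2**40       # 16 TB addressable by the buffer manager
MAX_SERIALIZED_OBJECT_BYTES = 32 * 2**30  # 32 GB per serialized batch/table page
MAX_TEMP_TABLE_ROWS = 2**31 - 1           # rows per temporary table

def check_temp_table(row_count, estimated_bytes):
    # Raises if a planned temporary table or materialization clearly exceeds
    # the documented limits; passing this check is necessary, not sufficient.
    if row_count > MAX_TEMP_TABLE_ROWS:
        raise ValueError("temporary table exceeds 2^31 - 1 rows")
    if estimated_bytes > MAX_BUFFER_SPACE_BYTES:
        raise ValueError("exceeds the buffer manager's 16 TB addressable space")
    return True
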
Other Considerations for Sizing

Each batch/table page requires an in-memory cache entry of approximately 128 bytes; thus the total number of batches that can be tracked is limited by the heap, which is also why we recommend increasing the processing batch size on larger-memory systems or in scenarios making use of large internal materializations. The actual batch/table itself is managed by the buffer manager, which has a layered memory buffer structure with a spill-over facility to disk.
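
The back-of-envelope sketch below shows why larger pages reduce this tracking overhead. The 8 MB page size is an assumption taken from the default serialized-object cap mentioned earlier, not a measured value.

def tracking_overhead_mb(buffer_space_gb, page_size_kb=8 * 1024):
    # Rough estimate of the heap consumed just by the ~128-byte cache entries
    # that track each batch/table page; larger pages mean fewer entries.
    pages = buffer_space_gb * 1024 * 1024 / page_size_kb
    return pages * 128 / (1024 * 1024)

# Roughly 16 MB of heap to track 1 TB of buffered data at 8 MB pages;
# halving the page size doubles this overhead.
print(tracking_overhead_mb(1024))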

Internal materialization is based on the buffer manager. Buffer manager settings may need to be updated based upon the desired amount of internal materialization performed by deployed VDBs.