Chapter 3. TerraTorch configuration options for geospatial model serving
Use the following Red Hat AI Inference Server arguments when starting AI Inference Server with the TerraTorch backend for geospatial model serving.
| Argument | Description |
|---|---|
| `--skip-tokenizer-init` | Skips tokenizer initialization. Vision models do not require a tokenizer. |
| `--enforce-eager` | Disables CUDA graph optimization for compatibility with geospatial model architectures. |
| `--io-processor-plugin` | Specifies the I/O processor plugin for segmentation tasks. |
| `--enable-mm-embeds` | Enables multimodal embeddings for processing geospatial imagery. |
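As a minimal sketch, the arguments above might be combined as follows when launching the server. The model name and the plugin value (`terratorch_segmentation`) are illustrative assumptions and should be replaced with the model and plugin you deploy:

```shell
# Sketch: start vLLM-based serving with the TerraTorch backend.
# Model name and plugin value below are placeholders, not verified defaults.
vllm serve ibm-nasa-geospatial/Prithvi-EO-2.0-300M \
  --skip-tokenizer-init \
  --enforce-eager \
  --io-processor-plugin terratorch_segmentation \
  --enable-mm-embeds
```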
Geospatial model serving with TerraTorch exposes the `/pooling` POST API endpoint for geospatial imagery inference requests.
Example request payload