Chapter 3. TerraTorch configuration options for geospatial model serving


Use the following Red Hat AI Inference Server arguments when starting AI Inference Server with the TerraTorch backend for geospatial model serving.

Table 3.1. Required Red Hat AI Inference Server server arguments for TerraTorch

--skip-tokenizer-init
    Skips tokenizer initialization. Vision models do not require a tokenizer.

--enforce-eager
    Disables CUDA graph optimization for compatibility with geospatial model architectures.

--io-processor-plugin terratorch_segmentation
    Specifies the I/O processor plugin for segmentation tasks.

--enable-mm-embeds
    Enables multimodal embeddings for processing geospatial imagery.
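Putting the required arguments together, a launch command might look like the following. This is a sketch that assumes the vLLM-compatible `vllm serve` entry point; the model name is taken from the example request payload in this chapter, and host and port depend on your deployment.

```shell
# Serve the Prithvi flood-segmentation model with the TerraTorch backend,
# passing all four required arguments from Table 3.1.
vllm serve ibm-nasa-geospatial/Prithvi-EO-2.0-300M-TL-Sen1Floods11 \
  --skip-tokenizer-init \
  --enforce-eager \
  --io-processor-plugin terratorch_segmentation \
  --enable-mm-embeds
```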

Geospatial model serving with TerraTorch exposes the /pooling POST API endpoint for geospatial imagery inference requests.

Example request payload

{
  "model": "ibm-nasa-geospatial/Prithvi-EO-2.0-300M-TL-Sen1Floods11",
  "data": {
    "data": "https://huggingface.co/ibm-nasa-geospatial/Prithvi-EO-2.0-300M-TL-Sen1Floods11/resolve/main/examples/India_900498_S2Hand.tif",
    "data_format": "url",
    "image_format": "tiff",
    "out_data_format": "b64_json"
  },
  "priority": 0
}
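The payload above can be assembled and the `b64_json` response decoded with a short client sketch. The helper names (`build_pooling_request`, `decode_mask`) are illustrative, not part of the product API, and actually sending the request (for example with `requests.post` against the server's `/pooling` endpoint) is left to your deployment.

```python
import base64
import json


def build_pooling_request(model: str, image_url: str) -> dict:
    """Build the /pooling request body shown in the example payload."""
    return {
        "model": model,
        "data": {
            "data": image_url,          # input image fetched by URL
            "data_format": "url",
            "image_format": "tiff",
            "out_data_format": "b64_json",
        },
        "priority": 0,                  # default request priority
    }


payload = build_pooling_request(
    "ibm-nasa-geospatial/Prithvi-EO-2.0-300M-TL-Sen1Floods11",
    "https://huggingface.co/ibm-nasa-geospatial/"
    "Prithvi-EO-2.0-300M-TL-Sen1Floods11/resolve/main/examples/"
    "India_900498_S2Hand.tif",
)
body = json.dumps(payload)              # JSON string to POST to /pooling


def decode_mask(b64_mask: str) -> bytes:
    """Decode a b64_json segmentation result back into raw bytes."""
    return base64.b64decode(b64_mask)
```

With `out_data_format` set to `b64_json`, the segmentation output arrives base64-encoded, so a decode step like `decode_mask` is needed before writing the result to disk.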
