Chapter 50. Getting started with TIPC


Transparent Inter-process Communication (TIPC), which is also known as Cluster Domain Sockets, is an Inter-process Communication (IPC) service for cluster-wide operation.

Applications running in a highly available and dynamic cluster environment have special needs. The number of nodes in a cluster can vary, routers can fail, and, due to load balancing considerations, functionality can be moved to different nodes in the cluster. TIPC minimizes the effort that application developers must spend on handling such situations and maximizes the chance that they are handled correctly and optimally. Additionally, TIPC provides more efficient and fault-tolerant communication than general protocols, such as TCP.

50.1. The architecture of TIPC

TIPC is a layer between applications that use TIPC and a packet transport service (bearer), and it spans the transport, network, and signaling link layers. However, TIPC can also use a different transport protocol as a bearer, so that, for example, a TCP connection can serve as a bearer for a TIPC signaling link.

TIPC supports the following bearers:

  • Ethernet
  • InfiniBand
  • UDP protocol
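
If you are not sure which media types the tipc utility on your system recognizes, you can list them. Whether the list subcommand is available, and the exact output, depends on the installed iproute2 version, so treat the following only as an example:

    # tipc media list
    eth
    ib
    udp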

TIPC provides a reliable transfer of messages between TIPC ports, which are the endpoints of all TIPC communication.

The following is a diagram of the TIPC architecture:

TIPC architectural overview

50.2. Loading the tipc module when the system boots

Before you can use the TIPC protocol, you must load the tipc kernel module. You can configure Red Hat Enterprise Linux to load this kernel module automatically when the system boots.

Procedure

  1. Create the /etc/modules-load.d/tipc.conf file with the following content:

    tipc
  2. Restart the systemd-modules-load service to load the module without rebooting the system:

    # systemctl restart systemd-modules-load

Verification

  • Use the following command to verify that RHEL loaded the tipc module:

    # lsmod | grep tipc
    tipc    311296  0

    If the command shows no entry for the tipc module, RHEL failed to load it.
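
    In that case, you can also try to load the module manually. The modprobe utility reports an error if the module cannot be found or loaded:

    # modprobe tipc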

50.3. Creating a TIPC network

To create a TIPC network, perform this procedure on each host that should join the TIPC network.

Important

The commands configure the TIPC network only temporarily. To configure TIPC on a node permanently, use the commands from this procedure in a script, and configure RHEL to execute that script when the system boots.
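
The following is a minimal sketch of one way to persist the configuration, using a hypothetical /usr/local/sbin/tipc-setup.sh script and a hypothetical tipc-setup.service systemd unit. Replace the identity, media, and device values with the ones you use in the procedure below:

    # cat /usr/local/sbin/tipc-setup.sh
    #!/bin/sh
    # Re-create the temporary TIPC configuration at every boot.
    # Use the same identity, media, and device as in the procedure below.
    tipc node set identity host_name
    tipc bearer enable media eth device enp1s0

    # cat /etc/systemd/system/tipc-setup.service
    [Unit]
    Description=Set up TIPC node identity and bearers
    # Wait until network devices are available and kernel modules are loaded.
    Wants=network-online.target
    After=network-online.target systemd-modules-load.service

    [Service]
    Type=oneshot
    ExecStart=/usr/local/sbin/tipc-setup.sh
    RemainAfterExit=yes

    [Install]
    WantedBy=multi-user.target

    # chmod +x /usr/local/sbin/tipc-setup.sh
    # systemctl daemon-reload
    # systemctl enable --now tipc-setup.service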

Prerequisites

  • The tipc kernel module is loaded. For details, see Loading the tipc module when the system boots.

Procedure

  1. Optional: Set a unique node identity, such as a UUID or the node’s host name:

    # tipc node set identity host_name

    The identity can be any unique string consisting of a maximum of 16 letters and numbers.

    Note that you can no longer set or change the identity after you add a bearer to the node in the next step.

  2. Add a bearer. For example, to use Ethernet as the media and the enp1s0 device as the physical bearer device, enter:

    # tipc bearer enable media eth device enp1s0
  3. Optional: For redundancy and better performance, attach further bearers by using the command from the previous step. You can configure up to three bearers, but not more than two on the same media. For an example that adds a UDP bearer, see the sketch after this procedure.
  4. Repeat all previous steps on each node that should join the TIPC network.
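
For example, the following commands sketch how you could attach a UDP bearer in addition to the Ethernet bearer from step 2, and then list the enabled bearers. The bearer name UDP0 and the IP address 192.0.2.1 are placeholders; use the local IP address of the node, and note that the list output can differ between versions:

    # tipc bearer enable media udp name UDP0 localip 192.0.2.1
    # tipc bearer list
    eth:enp1s0
    udp:UDP0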

Verification

  1. Display the link status for cluster members:

    # tipc link list
    broadcast-link: up
    5254006b74be:enp1s0-525400df55d1:enp1s0: up

    This output indicates that the link between bearer enp1s0 on node 5254006b74be and bearer enp1s0 on node 525400df55d1 is up.

  2. Display the TIPC publishing table:

    # tipc nametable show
    Type       Lower      Upper      Scope    Port       Node
    0          1795222054 1795222054 cluster  0          5254006b74be
    0          3741353223 3741353223 cluster  0          525400df55d1
    1          1          1          node     2399405586 5254006b74be
    2          3741353223 3741353223 node     0          5254006b74be
    • The two entries with service type 0 indicate that two nodes are members of this cluster.
    • The entry with service type 1 represents the built-in topology service, which applications use to track other services in the cluster.
    • The entry with service type 2 displays the link as seen from the issuing node. The range limit 3741353223 represents the peer endpoint’s address (a unique 32-bit hash value based on the node identity) in decimal format.
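
    To map such hash values to node identities, you can additionally list the nodes that this node knows about. The exact output format depends on the iproute2 version, so treat the following only as an example; note that the hexadecimal value df008507 equals 3741353223 in decimal:

    # tipc node list
    Node Identity               Hash
    525400df55d1                df008507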