Red Hat AMQ 6
As of February 2025, Red Hat is no longer supporting Red Hat AMQ 6. If you are using AMQ 6, please upgrade: Migrating to AMQ 7.
10.4. Alternative Master-Slave Cluster
Why use an alternative master-slave cluster?
				The standard master-slave cluster in Fabric uses Apache Zookeeper to manage the locking mechanism: in order to be promoted to master, a broker connects to a Fabric server and attempts to acquire the lock on a particular entry in the Zookeeper registry. If the master broker loses connectivity to the Fabric ensemble, it automatically becomes dormant (and ceases to accept incoming messages). A potentially undesirable side effect of this behaviour is that when you perform maintenance on the Fabric ensemble (for example, by shutting down one of the Fabric servers), you will find that the broker cluster shuts down as well.
			
				In some deployment scenarios, therefore, you might get better uptime and more reliable broker performance by disabling the Zookeeper locking mechanism (which Fabric employs by default) and using an alternative locking mechanism instead.
			
Alternative locking mechanism
				The Apache ActiveMQ persistence layer supports alternative locking mechanisms which can be used to enable a master-slave broker cluster. In order to use an alternative locking mechanism, you need to make at least the following basic configuration changes:
			
- Disable the default Zookeeper locking mechanism (which can be done by setting standalone=true in the broker's io.fabric8.mq.fabric.server-BrokerName PID).
- Enable the shared file system master/slave locking mechanism in the KahaDB persistence layer (see section "Shared File System Master/Slave" in "Fault Tolerant Messaging").
Note
					In fact, the KahaDB locking mechanism is usually enabled by default. This does not cause any problems with Fabric, because it operates at a completely different level from the Zookeeper locking mechanism: Zookeeper coordination and locking work at the broker level, coordinating broker startup, whereas the KahaDB lock coordinates startup of the persistence adapter.
				
standalone property
				The standalone property belongs to the io.fabric8.mq.fabric.server-BrokerName PID and is normally used for a non-Fabric broker deployment (for example, it is set to true in the etc/io.fabric8.mq.fabric.server-broker.cfg file). By setting this property to true, you instruct the broker to stop using the discovery and coordination services provided by Fabric (but it is still possible to deploy the broker in a Fabric container). One consequence of this is that the broker stops using the Zookeeper locking mechanism. But this setting has other side effects as well.
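				For reference, when a broker runs outside Fabric, this PID is typically backed by the etc/io.fabric8.mq.fabric.server-broker.cfg file. The following is a minimal sketch of such a file; apart from the standalone property itself, the property names and values shown here are illustrative and may differ in your installation:

# etc/io.fabric8.mq.fabric.server-broker.cfg (illustrative sketch)
standalone=true
broker-name=broker
data=${karaf.data}/broker
config=${karaf.base}/etc/activemq.xml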
			Side effects of setting standalone=true
				Setting the property, standalone=true, on a broker deployed in Fabric has the following effects:
			- Fabric no longer coordinates the locks for the brokers (hence, the broker's persistence adapter needs to be configured as shared file system master/slave instead).
- The broker no longer uses the ZookeeperLoginModule for authentication and falls back to using the PropertiesLoginModule instead. For the brokers to continue to accept connections, users must be stored in the etc/users.properties file or added to the PropertiesLoginModule JAAS realm in the container where the broker is running.
- Fabric discovery of brokers no longer works (which affects client configuration).
Configuring brokers in the cluster
				Brokers in the cluster must be configured as follows:
			
- Set the property, standalone=true, in each broker's io.fabric8.mq.fabric.server-BrokerName PID. For example, given a broker with the broker name, brokerx, which is configured by the profile, mq-broker-default.brokerx, you could set the standalone property to true using the following console command:
profile-edit --pid io.fabric8.mq.fabric.server-brokerx/standalone=true mq-broker-default.brokerx
- To customize the broker's configuration settings further, you need to create a unique copy of the broker configuration file in the broker's own profile (instead of inheriting the broker configuration file from the base profile, mq-base). If you have not already done so, follow the instructions in the section called “Customizing the broker configuration file” to create a custom broker configuration file for each of the brokers in the cluster.
- Configure each broker's KahaDB persistence adapter to use the shared file system locking mechanism. For this you must customize each broker configuration file, adding or modifying (as appropriate) the persistenceAdapter XML element (see the sketch after this list). You can edit this profile resource either through the Fuse Management Console, through the Git configuration approach (see Section 10.5, “Broker Configuration”), or using the fabric:profile-edit command.
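				The XML configuration in question points the persistence adapter at a directory on the shared file system. The following is a minimal sketch, assuming the shared file system is mounted at /sharedFileSystem on every broker host; the directory path is illustrative and must be replaced with your own location:

<persistenceAdapter>
    <!-- All brokers in the cluster must point at the same shared directory;
         the first broker to acquire the file lock becomes master, and the
         others wait on the lock as slaves. -->
    <kahaDB directory="/sharedFileSystem/sharedBrokerData"/>
</persistenceAdapter>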
Note
					For more details about configuring brokers, see Section 10.5, “Broker Configuration”.
				
Configuring authentication data
				When you set standalone=true on a broker, it can no longer use the default ZookeeperLoginModule authentication mechanism and falls back on the PropertiesLoginModule. This implies that you must populate authentication data in the etc/users.properties file on each of the hosts where a broker is running. Each line of this file takes an entry in the following format:
Username=Password,Role1,Role2,...
				Where each entry consists of Username and Password credentials and a list of one or more roles, Role1, Role2,....
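				For example, the following entry would define a user, admin, with the password, secret, and the admin role (all three values are illustrative and should be replaced with your own):

admin=secret,admin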
			Important
					Using such a decentralized approach to authentication in a distributed system such as Fabric is potentially problematic. For example, if you move a broker from one host to another, the authentication data would not automatically become available on the new host. You should, therefore, carefully consider the impact this might have on your administrative procedures.
				
Configuring a client
				Clients of the alternative master-slave cluster cannot use Fabric discovery to connect to the cluster. This makes the client configuration slightly less flexible, because you cannot abstract away the broker locations. In this scenario, it is necessary to list the host locations explicitly in the client connection URL.
			
				For example, to connect to a shared file system master-slave cluster that consists of three brokers, you could use a connection URL like the following:
			
failover:(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)
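				As an illustration of how a client might use such a URL, the following is a minimal JMS producer sketch; the broker host names, credentials, and queue name are assumptions that you must adapt to your own deployment:

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverClient {
    public static void main(String[] args) throws Exception {
        // Explicit failover URL listing all brokers in the cluster,
        // because Fabric discovery is not available with standalone=true.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)");
        // Credentials must match an entry in etc/users.properties
        // (the values here are illustrative).
        Connection connection = factory.createConnection("admin", "secret");
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("example.queue"); // illustrative queue name
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage("test message"));
        } finally {
            connection.close();
        }
    }
}

				The failover transport reconnects automatically to whichever broker currently holds the master lock, so a master failure is transparent to the client apart from a brief reconnection delay.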