A key step in setting up a reliable Liferay environment is to configure Liferay clustering. Liferay clustering can be achieved in multiple ways; the supported methods include UDP Multicast, UDP Unicast, and TCP Unicast. While UDP Multicast is recommended for better clustering performance, it is disabled in most enterprise environments and in all cloud environments such as AWS and Microsoft Azure. As a general practice, if UDP Multicast is not available or is disabled in the infrastructure, Xtivia recommends TCP Unicast for clustering Liferay, as TCP Unicast is typically more reliable than UDP Unicast for this purpose.

This post covers the steps required to set up TCP Unicast clustering in Liferay 6.2 EE environments. The same instructions apply to on-premise enterprise infrastructure as well as to the Azure and AWS platforms. Throughout this blog post, common locations have been abstracted into variable names for brevity; these variables do not have any meaning outside of the document itself. Example variables include $host1_FQDN, $name_of_the_s3bucket, $AWS_access_key, $AWS_secret_key, etc.

Configure TCP Unicast clustering for Liferay

To achieve TCP Unicast clustering for a Liferay environment, the following steps need to be performed on every Liferay instance that belongs to the cluster.

Step 1. Ensure that ports are open

For Liferay clustering to work, the hosts need to be able to communicate with each other to exchange clustering packets across the network. For our example, we will use port 7800 for TCP Unicast clustering in Liferay, so this port must be open between all cluster members in any firewalls (or, on AWS, security groups) that sit between them. In this post we provide two example configurations: one that uses TCPPING for member discovery and one that uses AWS S3_PING. Other discovery options for TCP Unicast clustering in Liferay are FILE_PING and JDBC_PING.
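A quick way to sanity-check connectivity is to test the clustering port from one node to the others. The snippet below is a minimal example, assuming the nc (netcat) utility is available on the hosts; note that the check will only report an open port once a Liferay instance is actually listening on it, so before startup the goal is simply to confirm that firewall rules or security groups are not blocking the port.

# From the first Liferay node, check that node 2 is reachable on the clustering port
nc -zv $host2_FQDN 7800
# Repeat for every other member of the cluster ($host3_FQDN, $host4_FQDN, ...)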

Step 2. Configure TCP Unicast

2.1 For non-AWS environment

Next, create an XML file containing the TCP Unicast (JGroups) configuration shown below. We recommend copying this file to the application server's global library directory so that it is on the classpath and can be read by the Liferay application at startup.

<!--
TCP based stack, with flow control and message bundling. This is usually used when IP
multicasting cannot be used in a network, e.g. because it is disabled (routers discard
multicast). Note that TCP.bind_addr and TCPPING.initial_hosts should be set, possibly
via system properties, e.g. -Djgroups.bind_addr=192.168.5.2 and
-Djgroups.tcpping.initial_hosts=192.168.5.2[7800]
author: Bela Ban
-->
<config xmlns="urn:org:jgroups"
		xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
		xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.1.xsd">
	<TCP bind_port="7800"
			singleton_name="Liferay"
			loopback="false"
			recv_buf_size="${tcp.recv_buf_size:5M}"
			send_buf_size="${tcp.send_buf_size:640K}"
			max_bundle_size="64K"
			max_bundle_timeout="30"
			enable_bundling="true"
			use_send_queues="true"
			sock_conn_timeout="300" 
                        timer_type="old"
			timer.min_threads="4"
			timer.max_threads="10"
			timer.keep_alive_time="3000"
			timer.queue_max_size="500"
			thread_pool.enabled="true"
			thread_pool.min_threads="1"
			thread_pool.max_threads="10"
			thread_pool.keep_alive_time="5000"
			thread_pool.queue_enabled="false"
			thread_pool.queue_max_size="100"
			thread_pool.rejection_policy="discard" 
                        oob_thread_pool.enabled="true"
			oob_thread_pool.min_threads="1"
			oob_thread_pool.max_threads="8"
			oob_thread_pool.keep_alive_time="5000"
			oob_thread_pool.queue_enabled="false"
			oob_thread_pool.queue_max_size="100"
			oob_thread_pool.rejection_policy="discard"/>
	<TCPPING timeout="3000"
		initial_hosts=
                        "$host1_FQDN[7800],$host2_FQDN[7800],$host3_FQDN[7800],$host4_FQDN[7800]"
			port_range="1"
			num_initial_members="10"/>
	<MERGE2 min_interval="10000"
			max_interval="30000"/>
	<FD_SOCK/>
	<FD timeout="3000" max_tries="3" />
	<VERIFY_SUSPECT timeout="1500" />
	<BARRIER />
	<pbcast.NAKACK2 use_mcast_xmit="false"
			discard_delivered_msgs="true"/>
	<UNICAST />
	<pbcast.STABLE stability_delay="1000" 
                        desired_avg_gossip="50000"
			max_bytes="4M"/>
	<pbcast.GMS print_local_addr="true" 
                        join_timeout="3000" 
                        view_bundling="true"/>
	<UFC max_credits="2M"
			min_threshold="0.4"/>
	<MFC max_credits="2M"
			min_threshold="0.4"/>
	<FRAG2 frag_size="60K" />
	<!--RSVP resend_interval="2000" 
                        timeout="10000"/-->
	<pbcast.STATE_TRANSFER/>
</config>
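On a Liferay 6.2 Tomcat bundle, for example, the global library is typically the Tomcat lib/ext folder. As a minimal sketch, assuming the configuration above was saved as tcp.xml (the file name is arbitrary, but it must match what is referenced in portal-ext.properties in Step 3), copying it there on each node is sufficient; the tcp.xml name and the path below are placeholders for your own installation layout.

# Copy the unicast configuration to the AppServer global library on every node
# (the Tomcat folder name varies by bundle version)
cp tcp.xml $LIFERAY_HOME/tomcat-7.0.x/lib/ext/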

2.2 For AWS environment

For Liferay deployed on AWS, we recommend the following configuration, which uses S3_PING for cluster member discovery. As with the previous example, we recommend copying the file to the application server's global library so that it can be read by the Liferay application at startup. Note that an S3 bucket that can be accessed by all Liferay instances must be created before TCP Unicast clustering is configured.

<!--
TCP based stack, with flow control and message bundling. This is usually used when IP
multicasting cannot be used in a network, e.g. because it is disabled (routers discard
multicast).Note that TCP.bind_addr and TCPPING.initial_hosts should be set, possibly
via system properties, e.g.-Djgroups.bind_addr=192.168.5.2 and
-Djgroups.tcpping.initial_hosts=192.168.5.2[7800]
author: Bela Ban
-->
<config xmlns="urn:org:jgroups"
		xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
		xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-2.12.xsd">
	<TCP bind_port="7800"
			singleton_name="LIFERAY_CLUSTER"
			loopback="true"
			recv_buf_size="${tcp.recv_buf_size:20M}"
			send_buf_size="${tcp.send_buf_size:640K}"
			discard_incompatible_packets="true"
			max_bundle_size="64K"
			max_bundle_timeout="30"
			enable_bundling="true"
			use_send_queues="true"
			sock_conn_timeout="300"
			timer_type="new"
			timer.min_threads="4"
			timer.max_threads="10"
			timer.keep_alive_time="3000"
			timer.queue_max_size="500"
			thread_pool.enabled="true"
			thread_pool.min_threads="1"
			thread_pool.max_threads="10"
			thread_pool.keep_alive_time="5000"
			thread_pool.queue_enabled="false"
			thread_pool.queue_max_size="100"
			thread_pool.rejection_policy="discard"
			oob_thread_pool.enabled="true"
			oob_thread_pool.min_threads="1"
			oob_thread_pool.max_threads="8"
			oob_thread_pool.keep_alive_time="5000"
			oob_thread_pool.queue_enabled="false"
			oob_thread_pool.queue_max_size="100"
			oob_thread_pool.rejection_policy="discard"/>
	<S3_PING location="$name_of_the_s3bucket" 
			access_key="$AWS_access_key"
			secret_access_key="$AWS_secret_key" 
			timeout="2000"
			num_initial_members="2"/>
	<MERGE2 min_interval="10000"
			max_interval="30000"/>
	<FD_SOCK/>
	<FD timeout="3000" max_tries="3" />
	<VERIFY_SUSPECT timeout="1500" />
	<BARRIER />
	<pbcast.NAKACK2
			use_mcast_xmit="false"
			discard_delivered_msgs="true"/>
	<UNICAST timeout="300,600,1200" />
	<pbcast.STABLE stability_delay="1000" 
			desired_avg_gossip="50000"
			max_bytes="4M"/>
	<pbcast.GMS print_local_addr="true" 
			join_timeout="3000"
			view_bundling="true"/>
	<UFC max_credits="2M"
			min_threshold="0.4"/>
	<MFC max_credits="2M"
			min_threshold="0.4"/>
	<FRAG2 frag_size="60K" />
</config>
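S3_PING stores each member's coordinates as small objects in the bucket named by $name_of_the_s3bucket, so the bucket must exist and the supplied credentials must be able to read from and write to it. As a minimal sketch, assuming the AWS CLI is installed and configured with the same credentials that Liferay will use:

# Create the discovery bucket (bucket names must be globally unique)
aws s3 mb s3://$name_of_the_s3bucket
# Confirm that the bucket is readable with these credentials
aws s3 ls s3://$name_of_the_s3bucket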

Step 3. Add properties to portal-ext.properties

For Liferay to use TCP Unicast clustering, the following properties need to be included in portal-ext.properties. Please note that we recommend setting the cluster.link.autodetect.address property to the database host and port. The assumption here is that the database host is always reachable and can therefore be used by the Liferay instances to accurately determine the network interface to use for clustering.

##
## Cluster Link
##
#
# Set this to true to enable the cluster link. This is required if you want
# to cluster indexing and other features that depend on the cluster link.
#
cluster.link.enabled=true
#cluster link channel properties
cluster.link.channel.properties.control=$name_of_the_tcp_unicast.xml
cluster.link.channel.properties.transport.0=$name_of_the_tcp_unicast.xml
cluster.link.autodetect.address=$database_host:$port
ehcache.cluster.link.replication.enabled=true
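As an illustration only, in an environment where the XML file from Step 2 was saved as tcp.xml and the Liferay database runs on a MySQL host, the custom properties might look like the following (the host name and port here are hypothetical and should be replaced with your own values):

cluster.link.channel.properties.control=tcp.xml
cluster.link.channel.properties.transport.0=tcp.xml
cluster.link.autodetect.address=db.example.internal:3306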

Step 4. Deploy the ehcache-cluster-web application

As a final step for configuring Liferay clustering, the Ehcache Cluster EE application from Liferay Marketplace needs to be deployed to all Liferay instances. Currently this can be found at https://www.liferay.com/marketplace/-/mp/application/15099166.
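If hot deploy is enabled, one simple way to roll the application out is to download the package from the Marketplace and drop it into the deploy folder of each node; the file name below is a placeholder for whatever the Marketplace download is actually called.

# Hot deploy the Ehcache Cluster EE package on every node
cp Ehcache_Cluster_EE.lpkg $LIFERAY_HOME/deploy/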

Step 5. Restart Liferay and verify

To verify that clustering is working as expected, you can take a look at the AppServer logs. The logs should contain lines similar to the following, showing successful cluster initialization:

INFO [localhost-startStop-1][ClusterBase:142] Autodetecting JGroups outgoing IP address
and interface for $database_host:port
INFO [localhost-startStop-1][ClusterBase:158] Setting JGroups outgoing IP address to
172.31.42.34 and interface to eth0
-------------------------------------------------------------------
GMS: address=ip-172-31-42-34-20199, cluster=LIFERAY-CONTROL-CHANNEL, physical
address=172.31.42.34:7800
-------------------------------------------------------------------
-------------------------------------------------------------------
GMS: address=ip-172-31-42-34-39646, cluster=LIFERAY-TRANSPORT-CHANNEL-0, physical
address=172.31.42.34:7800
-------------------------------------------------------------------

and lines similar to the following showing successful connectivity between the cluster members:

INFO  [localhost-startStop-1][BaseReceiver:64] Accepted view [ip-172-31-35-224-26939|11]
[ip-172-31-35-224-26939, ip-172-31-42-34-39646]

Alternatively, you could make changes to a web content article or a page on one node of the cluster and verify that the changes are replicated to the other members of the cluster.
