Liferay Unicast Session Replication

This tutorial walks you through configuring unicast session replication on Liferay Tomcat nodes. In AWS or other cloud environments, multicast clustering will not work, since broadcasting of packets is blocked in the cloud.

Multicast

  • Multicast clustering is used in LAN networks.
  • Multicast IP addresses range from 224.0.0.0 to 239.255.255.255.
  • In a multicast cluster configuration, all Tomcat servers point to a single multicast IP address.
  • In a two-node cluster, both servers point to the IP address "228.0.0.4". All serialized session objects are then replicated on both servers by distributing data from one server to the set of other servers. This is not supported in the AWS cloud, since AWS blocks multicast packet transmission.
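
For reference, the multicast membership definition in server.xml looks roughly like the sketch below (the attribute values are Tomcat's stock McastService defaults); this is the discovery mechanism that cloud networks block, and it is what the static membership configured later in this tutorial replaces:

      <!-- Default multicast membership: works on a LAN, blocked in AWS -->
      <Membership className="org.apache.catalina.tribes.membership.McastService"
                  address="228.0.0.4" port="45564"
                  frequency="500" dropTime="3000" />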

Unicast

  • Unicast clustering is used in WAN networks over the TCP protocol.
  • Unicast uses point-to-point communication and is well suited to a two-node cluster.
  • Unicast clustering is not recommended for clusters with many nodes, because every node must be listed as a static member on every other node.

Unicast Session Replication Steps in Tomcat:

  • Let's configure unicast session replication on two Tomcat nodes.
  • On both Tomcat servers, add the <distributable/> tag above </web-app> in tomcat/webapps/ROOT/WEB-INF/web.xml and in tomcat/conf/web.xml (see the web.xml sketch after these steps).
  • We use port 4444 in this tutorial; make sure that port is open on both servers.
  • Edit tomcat/conf/server.xml and locate the <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> tag.
  • In node1, copy the cluster configuration below in between the Engine tags and replace {node1-ip} and {node2-ip} with the machines' IP addresses.
  • The Receiver should be configured with the local IP address, and the static Member (the peer the Sender replicates to) with the node2 IP.
  • Let's say we are configuring the following server IPs; SimpleTcpCluster needs to be updated on both nodes with the XML configuration below:
    • Node1: 10.0.1.6
    • Node2: 10.0.1.7
    • Node1 – server.xml config:

      <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
               channelSendOptions="6" channelStartOptions="3">
        <Manager className="org.apache.catalina.ha.session.DeltaManager"
                 expireSessionsOnShutdown="false" notifyListenersOnReplication="true" />
        <Channel className="org.apache.catalina.tribes.group.GroupChannel">
          <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                    autoBind="0" selectorTimeout="5000" maxThreads="6"
                    address="10.0.1.6" port="4444" />
          <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
            <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"
                       timeout="60000" keepAliveTime="10" keepAliveCount="0" />
          </Sender>
          <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor"
                       staticOnly="true" />
          <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector" />
          <Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
            <Member className="org.apache.catalina.tribes.membership.StaticMember"
                    host="10.0.1.7" port="4444"
                    uniqueId="{1,3,5,7,8,0,0,2,0,0,1,0,0,0,0,9}" />
          </Interceptor>
        </Channel>
        <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter="" />
        <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve" />
        <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener" />
        <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener" />
      </Cluster>
    • In Node2, copy the cluster config below, replacing the Receiver address with the local IP and the static Member entry with Node1's IP. Note that the uniqueId of each StaticMember must be different for each node (here, the last byte differs).
    • Node2 – server.xml config:
      <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
               channelSendOptions="6" channelStartOptions="3">
        <Manager className="org.apache.catalina.ha.session.DeltaManager"
                 expireSessionsOnShutdown="false" notifyListenersOnReplication="true" />
        <Channel className="org.apache.catalina.tribes.group.GroupChannel">
          <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                    autoBind="0" selectorTimeout="5000" maxThreads="6"
                    address="10.0.1.7" port="4444" />
          <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
            <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"
                       timeout="60000" keepAliveTime="10" keepAliveCount="0" />
          </Sender>
          <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor"
                       staticOnly="true" />
          <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector" />
          <Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
            <Member className="org.apache.catalina.tribes.membership.StaticMember"
                    host="10.0.1.6" port="4444"
                    uniqueId="{1,3,5,7,8,0,0,2,0,0,1,0,0,0,0,1}" />
          </Interceptor>
        </Channel>
        <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter="" />
        <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve" />
        <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener" />
        <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener" />
      </Cluster>
  • Restart both servers; you should see session replication messages in the Tomcat logs.
  • Test the session replication by shutting down node1; all requests will be routed to node2 with the session intact. Hope this helps.
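
As referenced in the steps above, here is a minimal sketch of where the <distributable/> tag sits in web.xml; the surrounding elements are illustrative, only <distributable/> itself comes from this tutorial:

      <web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="3.1">
        <!-- Marks sessions as distributable so the cluster manager replicates them -->
        <distributable/>
      </web-app>

To verify replication end to end, one option is a small test page. The test.jsp below is a hypothetical helper you would add to both nodes yourself; it is not part of Liferay or Tomcat:

      <%-- test.jsp: increments a counter stored in the HTTP session --%>
      <%
        Integer count = (Integer) session.getAttribute("count");
        count = (count == null) ? Integer.valueOf(1) : Integer.valueOf(count + 1);
        session.setAttribute("count", count);
        out.println("session=" + session.getId() + " count=" + count);
      %>

Hit node1 first, then replay the same session cookie against node2 (assuming Tomcat's default HTTP port 8080):

      curl -c cookies.txt http://10.0.1.6:8080/test.jsp
      curl -b cookies.txt http://10.0.1.7:8080/test.jsp

If replication works, the counter keeps incrementing on node2 instead of starting over.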

Comments


  1. Anon

    Your ” has been replaced with \u201 and \u2033 in the XML above.

    1. Jayaram Pokuri (Post author)

      Yeah, updated with the proper config.
  2. Neil

    Should the unique ID be the same for both nodes or do they have to be unique?

    1. Sida

      The uniqueId must be different in each Tomcat's configuration.

