Liferay Cache Replication With JGroups
This tutorial explains how to set up Liferay cache replication with JGroups.
What is JGroups?
JGroups is a powerful Java networking framework with a robust protocol stack and reliable unicast and multicast message transmission. In this tutorial we will look at both the unicast and the multicast Liferay clustering configuration with JGroups.
- The JGroups cache replication setup requires JGroupsCacheManagerPeerProviderFactory as the peer provider factory, which acts as the global cache manager.
- JGroupsCacheReplicatorFactory acts as the cache event listener.
- The JGroups TCP configuration goes in the tcp.xml file and the multicast configuration in the udp.xml file.
Liferay Unicast Cache Replication with JGroups:
This section explains the Liferay unicast clustering configuration with the TCPPING discovery protocol.
Step1: Configure Tomcat setenv.sh
- Open setenv.sh in the tomcat/bin folder.
- Configure bind_addr and the initial hosts on each node.
- Add the list of all cluster nodes to the -Djgroups.tcpping.initial_hosts attribute to define the servers in the cluster network.
- The -Djgroups.bind_addr attribute holds the current server node's IP.
- Add the below to CATALINA_OPTS in setenv.sh:
-Djava.net.preferIPv4Stack=true -Djgroups.bind_addr=10.0.1.205 -Djgroups.tcpping.initial_hosts=10.0.1.205[7800],10.0.1.206[7800],10.0.1.207[7800]
- Finally, CATALINA_OPTS in the setenv.sh file will look like:
CATALINA_OPTS="$CATALINA_OPTS -Dfile.encoding=UTF8 -Djava.net.preferIPv4Stack=true -Dorg.apache.catalina.loader.WebappClassLoader.ENABLE_CLEAR_REFERENCES=false -Duser.timezone=GMT -XX:NewSize=1024m -XX:MaxNewSize=1024m -Xms4096m -Xmx4096m -XX:PermSize=256m -XX:MaxPermSize=512m -XX:NewRatio=2 -XX:SurvivorRatio=4 -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=0 -XX:+UseParNewGC -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+CMSCompactWhenClearAllSoftRefs -XX:CMSInitiatingOccupancyFraction=85 -XX:+CMSScavengeBeforeRemark -XX:+CMSConcurrentMTEnabled -XX:ParallelCMSThreads=2 -XX:+UseLargePages -XX:+UseCompressedOops -XX:+DisableExplicitGC -XX:+UseBiasedLocking -XX:+BindGCTaskThreadsToCPUs -XX:+UseFastAccessorMethods -XX:+CMSClassUnloadingEnabled -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=10.0.1.205 -Dcom.sun.management.jmxremote.port=18080 -Djava.library.path=/usr/local/apr/lib -Djava.security.manager -Djava.security.policy=$CATALINA_HOME/conf/catalina.policy -Djgroups.bind_addr=10.0.1.205 -Djgroups.tcpping.initial_hosts=10.0.1.205[7800],10.0.1.206[7800],10.0.1.207[7800]"
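Per node, only the bind address changes while the initial_hosts list stays identical everywhere. A minimal sketch of just the JGroups-related part of setenv.sh for the second node (assuming its IP is 10.0.1.206):

```shell
# JGroups-related options for node 2 (hypothetical IP 10.0.1.206).
# Only jgroups.bind_addr differs per node; initial_hosts is the same on every node.
CATALINA_OPTS="$CATALINA_OPTS -Djava.net.preferIPv4Stack=true"
CATALINA_OPTS="$CATALINA_OPTS -Djgroups.bind_addr=10.0.1.206"
CATALINA_OPTS="$CATALINA_OPTS -Djgroups.tcpping.initial_hosts=10.0.1.205[7800],10.0.1.206[7800],10.0.1.207[7800]"
```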
Step2: Liferay EHCache Configuration
- Liferay is configured with Ehcache by default at the configuration path below; you can find this in the EhcachePortalCacheManager class.
- We may need to override these files to fine-tune entity caching in the cluster.
private static final String _DEFAULT_CLUSTERED_EHCACHE_CONFIG_FILE = "/ehcache/liferay-multi-vm-clustered.xml";
- Create a myehcache folder in /tomcat/webapps/ROOT/WEB-INF/classes.
- Copy the hibernate-clustered.xml and liferay-multi-vm-clustered.xml files into the myehcache folder.
- Extract the jgroups.jar file, which can be found in the tomcat/webapps/ROOT/WEB-INF/lib folder, and copy its tcp.xml file into the myehcache folder as well.
- Edit the tcp.xml file, replace the TCPPING section with the content below, and set bind_port to 7800.
- <TCPPING timeout="3000" initial_hosts="${jgroups.tcpping.initial_hosts:10.0.2.7[7800],10.0.2.8[7800]}" port_range="2" num_initial_members="10"/>
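For context, the TCPPING element sits in the tcp.xml protocol stack right after the TCP transport, which is where bind_port is set. A sketch of the two relevant elements (the attribute values shown are illustrative; keep the rest of the stack as shipped in the jar):

```xml
<!-- Sketch only: transport plus discovery from a JGroups 3.x tcp.xml.
     bind_port must match the port used in jgroups.tcpping.initial_hosts. -->
<TCP bind_port="7800"
     bind_addr="${jgroups.bind_addr:127.0.0.1}"/>
<TCPPING timeout="3000"
         initial_hosts="${jgroups.tcpping.initial_hosts:10.0.2.7[7800],10.0.2.8[7800]}"
         port_range="2"
         num_initial_members="10"/>
```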
Step3: Update portal-ext.properties file
### Liferay Uni Cast Cluster Config Properties ####
web.server.display.node=true
lucene.replicate.write=true
net.sf.ehcache.configurationResourceName=/myehcache/hibernate-clustered.xml
ehcache.multi.vm.config.location=/myehcache/liferay-multi-vm-clustered.xml
ehcache.bootstrap.cache.loader.factory=com.liferay.portal.cache.ehcache.JGroupsBootstrapCacheLoaderFactory
ehcache.cache.event.listener.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory
ehcache.cache.manager.peer.provider.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory
net.sf.ehcache.configurationResourceName.peerProviderProperties=file=myehcache/tcp.xml
ehcache.multi.vm.config.location.peerProviderProperties=file=myehcache/tcp.xml
Step4: Delete QUARTZ tables in MySQL
Now delete the tables whose names start with QUARTZ. With that, the Liferay unicast cluster cache replication setup is done; make sure port 7800 is open on all servers.
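With the servers stopped, the QUARTZ tables can be found and dropped from the MySQL client. The command below is a sketch that only prints the DROP statements for review; it assumes the Liferay database is named lportal (adjust the name and credentials for your environment), and piping the output back into mysql would execute them:

```shell
# Sketch: list DROP statements for all QUARTZ* tables in the hypothetical 'lportal' schema.
# Review the output before executing it against the database.
mysql -u root -p -N -e "SELECT CONCAT('DROP TABLE ', table_name, ';')
  FROM information_schema.tables
  WHERE table_schema = 'lportal' AND table_name LIKE 'QUARTZ%';"
```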
Liferay Multicast Cache Replication with JGroups:
Note: Make sure that the cluster.link.enabled property is set to false. Alternatively, the Cluster Link configuration can be used, in which case the tcp.xml/udp.xml files are not required; the UDP/TCP configuration can then be set through portal-ext.properties in the peerProviderProperties attribute.
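As a sketch of that file-less alternative, the Ehcache JGroups peer provider also accepts an inline connect string instead of a file reference. The protocol stack below is abbreviated and illustrative, not a tested production configuration:

```properties
# Hypothetical inline alternative to file=myehcache/udp.xml:
# a JGroups plain-string stack passed via connect= (abbreviated example).
ehcache.multi.vm.config.location.peerProviderProperties=connect=UDP(mcast_addr=228.10.10.10;mcast_port=45588):PING:MERGE2:FD_SOCK:VERIFY_SUSPECT:pbcast.NAKACK:UNICAST:pbcast.STABLE:pbcast.GMS
```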
Step1: Update setenv.sh
- Set the Liferay nodes to use IPv4 and add the below configuration to CATALINA_OPTS in the setenv.sh file:
-Djava.net.preferIPv4Stack=true
Step2: EhCache Configuration:
- Create a myehcache folder in /tomcat/webapps/ROOT/WEB-INF/classes and copy the hibernate-clustered.xml and liferay-multi-vm-clustered.xml files into it.
- Extract the jgroups.jar file and copy its udp.xml file into the myehcache folder.
- Add the mcast_addr attribute to the UDP section as shown below:
<config xmlns="urn:org:jgroups"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.1.xsd">
    <UDP mcast_addr="${jgroups.udp.mcast_addr:228.10.10.10}"
         mcast_port="${jgroups.udp.mcast_port:45588}"
         tos="8"
         ucast_recv_buf_size="5M"
         ucast_send_buf_size="640K"
         mcast_recv_buf_size="5M"
         mcast_send_buf_size="640K"
         loopback="true"
         max_bundle_size="64K"
         max_bundle_timeout="30"
         ip_ttl="${jgroups.udp.ip_ttl:8}"
         enable_bundling="true"
         enable_diagnostics="true"
         thread_naming_pattern="cl"
         timer_type="old"
         timer.min_threads="4"
         timer.max_threads="10"
         timer.keep_alive_time="3000"
         timer.queue_max_size="500"
         thread_pool.enabled="true"
         thread_pool.min_threads="2"
         thread_pool.max_threads="8"
         thread_pool.keep_alive_time="5000"
         thread_pool.queue_enabled="true"
         thread_pool.queue_max_size="10000"
         thread_pool.rejection_policy="discard"
         oob_thread_pool.enabled="true"
         oob_thread_pool.min_threads="1"
         oob_thread_pool.max_threads="8"
         oob_thread_pool.keep_alive_time="5000"
         oob_thread_pool.queue_enabled="false"
         oob_thread_pool.queue_max_size="100"
         oob_thread_pool.rejection_policy="discard"/>
    <PING timeout="2000" num_initial_members="20"/>
    <MERGE2 max_interval="30000" min_interval="10000"/>
    <FD_SOCK/>
    <FD_ALL/>
    <VERIFY_SUSPECT timeout="1500"/>
    <BARRIER/>
    <pbcast.NAKACK2 xmit_interval="1000"
                    xmit_table_num_rows="100"
                    xmit_table_msgs_per_row="2000"
                    xmit_table_max_compaction_time="30000"
                    max_msg_batch_size="500"
                    use_mcast_xmit="false"
                    discard_delivered_msgs="true"/>
    <UNICAST xmit_interval="2000"
             xmit_table_num_rows="100"
             xmit_table_msgs_per_row="2000"
             xmit_table_max_compaction_time="60000"
             conn_expiry_timeout="60000"
             max_msg_batch_size="500"/>
    <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000" max_bytes="4M"/>
    <pbcast.GMS print_local_addr="true" join_timeout="3000" view_bundling="true"/>
    <UFC max_credits="2M" min_threshold="0.4"/>
    <MFC max_credits="2M" min_threshold="0.4"/>
    <FRAG2 frag_size="60K"/>
    <RSVP resend_interval="2000" timeout="10000"/>
    <pbcast.STATE_TRANSFER/>
    <!-- pbcast.FLUSH /-->
</config>
Step3: Update portal-ext.properties file
### Liferay Multi Cast Cluster Config Properties ####
web.server.display.node=true
lucene.replicate.write=true
net.sf.ehcache.configurationResourceName=/myehcache/hibernate-clustered.xml
ehcache.multi.vm.config.location=/myehcache/liferay-multi-vm-clustered.xml
ehcache.bootstrap.cache.loader.factory=com.liferay.portal.cache.ehcache.JGroupsBootstrapCacheLoaderFactory
ehcache.cache.event.listener.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory
ehcache.cache.manager.peer.provider.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory
net.sf.ehcache.configurationResourceName.peerProviderProperties=file=myehcache/udp.xml
ehcache.multi.vm.config.location.peerProviderProperties=file=myehcache/udp.xml
Now delete the tables with the QUARTZ prefix and start all servers. All Tomcat servers are now configured for multicast cache replication with JGroups.
Comments on "Liferay Cache Replication With JGroups"
Hello! I used your tutorial and applied the settings exactly as you wrote them. I have a 4-node Tomcat cluster, and it fails with errors such as:
04:26:56,634 ERROR [UDP:1444] failed handling incoming message
java.lang.NoClassDefFoundError: org/jgroups/protocols/pbcast/NAKACK$StatsEntry
:15:10,056 ERROR [JGroupsCacheReceiver:109] Failed to handle message JGroupEventMessage [event=REMOVE_ALL, cacheName=com.liferay.portal.freemarker.LiferayCacheStorage, serializableKey=null, element=null]
java.lang.IllegalStateException: The com.liferay.portal.freemarker.LiferayCacheStorage Cache is not alive.
04:17:49,999 ERROR [UDP:1444] failed handling incoming message
java.lang.NoClassDefFoundError: org/jgroups/protocols/pbcast/STABLE$StabilitySendTask.
Could it be that the jgroups.jar was not updated? Do you know why this error occurs?
Please provide your configuration. I did not use the Liferay-bundled jgroups.jar file.