We recently came across an interesting issue in Liferay. One of our clients was trying to cluster Liferay instances using the ehcache-cluster-web (Ehcache Cluster EE.lpkg) application downloaded from Liferay Marketplace.

Liferay instances were running fine with no issues. However, after deploying the ehcache-cluster-web application and starting it up, the Liferay environment stopped responding, and the logs showed an “Unable to create scheduled task” exception. From our experience with Liferay, a task scheduler exception in most cases means that Liferay, or one of the applications deployed on it, could not perform a database insert or update.

We proceeded to enable Hibernate SQL logging by adding hibernate.show_sql=true to portal-ext.properties.
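For reference, the change to portal-ext.properties is a single line; the hibernate.format_sql entry below is an optional addition we assume is passed through to Hibernate like the other hibernate.* settings, and only makes the logged statements easier to read:

    # portal-ext.properties
    # Log every SQL statement Hibernate issues (remove once debugging is done).
    hibernate.show_sql=true
    # Optional (assumed): pretty-print the logged SQL.
    hibernate.format_sql=true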

Adding this property and restarting Liferay gave us some insight: the logs showed that inserts into the lock_ table were failing with right-truncation errors. Initial breakthrough!

As a next step, we stopped the Liferay instance and increased the length of the owner column in the lock_ table in the Liferay database. From our tests in all the environments we set up locally on different application servers, we had found that a base-64 encoded value was being inserted into the owner column of the lock_ table.

We increased the owner column in the lock_ table from varchar(255) to varchar(400). Once the change was made to the database table, we restarted the Liferay instances. This time there were no exceptions in the logs, and we could see a row inserted into the lock_ table; the value in the owner column was 353 characters long! Progress!
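The exact DDL depends on the database in use; as a rough sketch, on a MySQL-backed environment (hypothetical here, adjust the syntax for your database and back up the table first) the change would look like:

    -- Widen the owner column so the longer clustered-lock value fits.
    ALTER TABLE lock_ MODIFY owner VARCHAR(400) NULL;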

The value in the owner column looked something like:

IMKsw61zciZjb20ubGlmZXJheS5wb3J0YWwuY2x1c3Rlci5BZGRyZXNzSW1wbG7FocKxw5XDjcOMw7hbMV1
 MIF9hZGRyZXNzdCBMb3JnL2pncm91cHMvQWRkcmVzczt3MS9saWZlcmF5LXBvcnRhbC02LjItZWUtc3A0LT
 IwMTQwNTA5MTA0ODE2MjYzLndhcnhwc3Igb3JnLmpncm91cHMudXRpbC5VVUlEwqoNCjt1w6B+wqQNCncxL
 2xpZmVyYXktcG9ydGFsLTYuMi1lZS1zcDQtMjAxNDA1MDkxMDQ4MTYyNjMud2FyeHB3wqTDjiHDplLCrELC
 oT5lw67Cv34l4oCgeA==

We then decoded the field from base-64 to plain text to see what it contained, and this was the result:

¬ísr&com.liferay.portal.cluster.AddressImplnš±ÕÍÌø[1]L _addresst Lorg/jgroups/Address
 ;w1/liferay-portal-6.2-ee-sp4-20140509104816263.warxpsr org.jgroups.util.UUIDª;uà~¤w1
 /liferay-portal-6.2-ee-sp4-20140509104816263.warxpw¤Î!æR¬B¡>eî¿~%†x
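If you want to reproduce the decoding step, here is a minimal sketch (our own helper, not part of Liferay) using Java 8's java.util.Base64; the MIME decoder tolerates the line breaks that get introduced when the value is copied out of the database:

    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class DecodeLockOwner {
        public static void main(String[] args) {
            // Hypothetical input: the base-64 string copied from the owner column of the lock_ table.
            String encoded = args[0];
            // The MIME decoder ignores line breaks in the copied value.
            byte[] bytes = Base64.getMimeDecoder().decode(encoded);
            // The payload is a serialized com.liferay.portal.cluster.AddressImpl, so much of it is
            // binary; ISO-8859-1 keeps every byte printable enough to spot the embedded class names
            // and the .war file name.
            System.out.println(new String(bytes, StandardCharsets.ISO_8859_1));
        }
    }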

This value showed us that the ehcache-cluster-web application was including the name of the war file (the file Liferay was deployed from) in the value written to the owner column of the lock_ table. With the long war file name, the value became too long for the field, causing the error.

To fix the issue, we undeployed the Liferay webapp, stopped the Liferay instance, reverted the database change, renamed the Liferay webapp to Liferay.war, cleaned the WebSphere temp folder, and deployed the webapp again. This time, it worked as expected.

In conclusion, while Liferay works really well with WebSphere Application Server, a few items vary from one environment to another during WAS setup, and the length of the Liferay webapp name is one of them. We also wanted to find the maximum length of a Liferay webapp filename that could be deployed in the WebSphere environment; in our tests, the war file name for the Liferay application couldn't exceed 9 characters. As we work through different scenarios and uncover limitations with Liferay setup on WAS, we will post more updates.
