Rock-Solid Liferay Plugin Deployments

This blog post describes the overall process recommended by Xtivia for deploying custom Liferay plugin applications to Liferay.  Note that while this document is primarily targeted at the deployment of custom Liferay plugins, many of the concepts do apply to non-Liferay web applications being deployed to the Tomcat application server.

Hot deployments in Liferay bundles

Liferay bundles normally ship with Apache Tomcat’s hot deployment feature enabled. While hot deployment is convenient because deployed artifacts become available immediately, a number of problems arise when Tomcat hot deployment is enabled that will cause the system to become unstable over time. As a result, Xtivia highly recommends disabling hot deployment at the application server level; it should only be enabled for lightweight testing purposes on local developer systems.

Problem Definition

The basic issues presented by the use of hot deployment on Apache Tomcat include the following:

  • Ongoing leaks in the JVM’s permanent generation memory space. Enabling hot deployments on a Tomcat application server instance will cause the amount of memory consumed by the permanent generation to increase over time, eventually resulting in an outage.  This is caused by the way that Tomcat handles web application deployments; the only remediation step is to restart the application server instance.
  • Tomcat has also been observed to have problems cleaning up the global classloader during deployments, which can lead to class loader errors within the JVM, again causing an outage.
  • The JVM will at times run into conflicts with the operating system when attempting to update filesystem content during a deployment; this can result in corrupted application deployments, as in-memory content fails to overwrite files which are marked as locked on the filesystem.  These issues are extremely difficult to detect or triage.
  • Liferay’s preprocessing of a plugin should be executed at deploy time in order to minimize build overhead and avoid potential mismatches between the target Liferay installation and the custom Liferay plugin.

Approach

The overall approach taken for persistent multi-user Apache Tomcat installations is to disable hot deployment at the application server level and include an application server restart in the application deployment process.  The specifics of implementation may vary, depending on the delivery toolchain in place, but the overall process is as follows:

  1. Disable Apache Tomcat’s hot deployment processor via modifications to the Tomcat configuration files.
  2. Modify Liferay’s deployment process to force it to deploy an atomic WAR file, rather than an exploded application tree.
  3. For each deployment, remove the target Tomcat instance from circulation during the deployment process.
  4. During a deployment execution, clear the directories that Tomcat uses to store copies of the applications at runtime.
  5. Execute validation of the deployment process prior to placing the Tomcat instance back in circulation.

Technical Details

Apache Tomcat Configuration

To turn off auto deployment in Tomcat, a configuration change needs to be made to Tomcat’s ${CATALINA_BASE}/conf/server.xml file. An attribute named autoDeploy with a value of “false” needs to be added to the Host element nested within the Engine element in server.xml. An example follows:

<Service name="Catalina">
    <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" URIEncoding="UTF-8" />
    <Engine name="Catalina" defaultHost="localhost">
        <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="false">
        </Host>
    </Engine>
</Service>

Liferay Configuration

To prevent the Liferay deployment process from automatically expanding deployed plugins into the ${CATALINA_BASE}/webapps directory, the following configuration needs to be added to the Liferay portal-ext.properties file for the instance.

auto.deploy.unpack.war=false

The purpose of this change is to provide some differentiation between applications that have only been processed by Liferay and applications which have been fully deployed by an application server restart.

Standard Plugin Deployment Process

Before any deployments are performed on a Liferay instance, that instance should be taken out of circulation at the load balancer or web server level. This prevents traffic from reaching the instance on which deployments are being performed, eliminating the risk that end user requests will inadvertently be routed to the target server during the deployment process. This has the secondary benefit of providing a safe window after the deployment has been done during which deployment validation can occur.  Details on how to remove an individual Tomcat instance from circulation depend heavily on the load balancer or web server in use, and are beyond the scope of this document.

Once the application server has stopped receiving inbound traffic from the load balancers/web servers, the following steps should be executed:

  1. Move the target web application WAR file from the Tomcat ${CATALINA_BASE}/webapps directory to a temporary backup location.  If any step of the deployment process fails, restore to the previous state by copying this backed up WAR file back into the ${CATALINA_BASE}/webapps directory.
  2. Delete the directory containing the expanded version of the target application from the ${CATALINA_BASE}/webapps directory.
  3. Copy the plugin WAR that you want to be deployed to the Liferay deploy directory, located in ${liferay.home}/deploy.
  4. Verify that the operating system user that owns the Liferay process has full write permissions on the copied artifact.
  5. Wait for the new version of the application WAR to be available in the ${CATALINA_BASE}/webapps directory. Typically this is denoted by Liferay in the catalina.out and liferay-*.log files with the following message:
Deployment will start in a few seconds

At this point the Liferay plugin deployment processing activity is complete.

  6. Stop the Liferay Apache Tomcat application server process.
  7. Delete the contents of the ${CATALINA_BASE}/temp and ${CATALINA_BASE}/work directories.
  8. Start the Liferay Apache Tomcat application server process.
  9. Perform functional validation testing for each of the deployed applications.
  10. Remove the original target web application from the temporary backup location.

Once this process is complete, the Apache Tomcat server can be placed back into circulation at the load balancer or web server level.
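
The steps above lend themselves well to scripting as part of a delivery toolchain. The following is a minimal bash sketch of the standard deployment process; the paths, the plugin name, the Liferay OS user, and the load-balancer handling are placeholders that will need to be adapted to your environment.

#!/bin/bash
# Minimal sketch of the standard plugin deployment steps described above.
# CATALINA_BASE, LIFERAY_HOME, PLUGIN, the liferay OS user, and the build
# output path are placeholders/assumptions -- adapt them to your environment.
set -euo pipefail

CATALINA_BASE=/opt/liferay/tomcat-8.0.32
LIFERAY_HOME=/opt/liferay
PLUGIN=my-custom-portlet                 # WAR name without the .war extension
NEW_WAR=/path/to/build/output/$PLUGIN.war
BACKUP_DIR=/tmp/deploy-backup-$(date +%Y%m%d%H%M%S)

mkdir -p "$BACKUP_DIR"

# Steps 1-2: back up the current WAR for rollback and remove the expanded copy.
if [ -f "$CATALINA_BASE/webapps/$PLUGIN.war" ]; then
    mv "$CATALINA_BASE/webapps/$PLUGIN.war" "$BACKUP_DIR/"
fi
rm -rf "$CATALINA_BASE/webapps/$PLUGIN"

# Steps 3-4: hand the new WAR to Liferay's deploy listener, owned by the Liferay OS user.
cp "$NEW_WAR" "$LIFERAY_HOME/deploy/"
chown liferay:liferay "$LIFERAY_HOME/deploy/$PLUGIN.war"

# Step 5: wait for Liferay's pre-processing to place the WAR back in webapps.
until [ -f "$CATALINA_BASE/webapps/$PLUGIN.war" ]; do
    sleep 5
done

# Steps 6-8: restart Tomcat with clean temp and work directories.
"$CATALINA_BASE/bin/shutdown.sh"
sleep 30    # crude wait; a production script should poll until the process has exited
rm -rf "$CATALINA_BASE/temp" "$CATALINA_BASE/work"
mkdir -p "$CATALINA_BASE/temp" "$CATALINA_BASE/work"
"$CATALINA_BASE/bin/startup.sh"

# Steps 9-10 (functional validation and removing the backup) remain manual here.
echo "Deployment staged on this node; validate before returning it to the load balancer."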

Liferay EXT Plugin Deployment

Liferay EXT plugins present an edge case in the plugin deployment process.  These plugins actually modify the installed instance of the Liferay application itself, and they often include JAR files which need to be included in the application server’s global classpath; to accommodate these additional requirements, we recommend using a process similar to the one used to apply and deploy Liferay application patches/hotfixes.  A sample process for deploying EXT plugins is as follows:

On a clean bundle matching the version of Liferay that you intend to deploy, do the following:

  1. Unzip the target bundle into a temporary location.
  2. Apply all necessary hotfixes & patches for the target environment to the bundle’s Liferay instance.
  3. Start the bundle using the Tomcat startup.sh or startup.bat script.
  4. Deploy the EXT plugin to the temporary bundle by dropping it into the bundle’s deploy directory.
  5. Wait for the EXT plugin to be deployed by the temporary Liferay instance.
  6. Shut down and restart the temporary Liferay instance.
  7. Shut down the temporary Liferay instance.
  8. Bundle the contents of the ROOT application in the temporary Liferay Tomcat instance’s webapps directory into a ROOT.war file.  
  9. Copy the ROOT.war file along with all EXT-generated JAR files from the temporary Liferay Tomcat instance’s lib/ext directory into a central location for deployment (steps 8 and 9 are sketched in the script after this list).
  10. Remove the temporary Liferay instance.
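
Steps 8 and 9 above can be scripted once the temporary instance has been shut down. The sketch below assumes the temporary bundle was unzipped to /tmp/liferay-bundle and that the packaged artifacts are collected under /tmp/ext-dist; both paths, and the JAR name pattern, are assumptions to adjust for your EXT plugin.

# Sketch of steps 8 and 9: package ROOT.war and collect the EXT JARs from the
# temporary bundle. /tmp/liferay-bundle and /tmp/ext-dist are assumed paths.
TMP_BUNDLE=/tmp/liferay-bundle
DIST=/tmp/ext-dist
mkdir -p "$DIST"

# Repackage the EXT-modified ROOT application into a WAR file.
(cd "$TMP_BUNDLE"/tomcat-*/webapps/ROOT && jar -cf "$DIST/ROOT.war" .)

# Collect the JAR files that the EXT deployment added to the global classpath.
# Compare against a pristine bundle (or check file timestamps) to identify which
# JARs the EXT plugin generated, and adjust this glob accordingly.
cp "$TMP_BUNDLE"/tomcat-*/lib/ext/ext-*.jar "$DIST/"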

On each of the application servers targeted for deployment, execute the following:

  1. As for standard plugins, first remove the target Tomcat instance from the load balancer or web server.  
  2. Stop the Liferay Apache Tomcat application server process.
  3. Move the following files from the Tomcat ${CATALINA_BASE}/webapps directory to a temporary backup location.  If any step of the deployment process fails, restore to the previous state by copying these backed up files back into the ${CATALINA_BASE}/webapps directory.
    1. ROOT.war
    2. The WAR file for the target application
  4. Move any JAR files which will be added by this EXT plugin from the Tomcat ${CATALINA_BASE}/lib directory to a temporary backup location.  If any step of the deployment process fails, restore to the previous state by copying these files back into the source directory.
  5. Delete the ROOT directory containing the expanded version of the Liferay Portal application from the ${CATALINA_BASE}/webapps directory.
  6. Delete the contents of the ${CATALINA_BASE}/temp and ${CATALINA_BASE}/work directories.
  7. Deploy the Liferay Portal application ROOT.war file to the target application server’s ${CATALINA_BASE}/webapps directory.
  8. Deploy all JAR files gathered from the temporary Liferay bundle’s lib/ext directory to the target application server’s ${CATALINA_BASE}/lib directory.
  9. Start the Liferay Apache Tomcat application server process.
  10. Perform functional validation testing for each of the deployed applications.
  11. Remove the original ROOT.war and JAR files from the temporary backup location.
  12. Place the target server back into circulation at the load balancer or web server level.

The Business Case for Managing Liferay with Chef

Chef is a powerful automation platform that transforms complex infrastructure into code, bringing your servers and services to life. Whether you’re operating in the cloud, on-premises, or a hybrid, Chef automates how applications are configured, deployed, and managed across your network, no matter its size. Chef is built around simple concepts: achieving desired state, centralized modeling of IT infrastructure, and resource primitives that serve as building blocks. These concepts enable you to quickly manage any infrastructure with Chef. These very same concepts allow Chef to handle the most difficult infrastructure challenges on the planet. Anything that can run the chef-client can be managed by Chef.
More information can be found at the following website: https://docs.chef.io/chef_overview.html

Liferay’s out-of-the-box packaging provides a set of ready-made “bundles” which include a functional application server configured to work with the Liferay application.  While these “bundles” do make it convenient for an individual to quickly start up a local Liferay instance for experimentation, they make assumptions that directly conflict with long-term maintainability and scalability.  As a result, Xtivia has devised an alternative deployment structure for all Liferay installations; this deployment structure is designed to provide the following benefits:

  1. Scalability for local application server instances
  2. Isolation of the Liferay application server from the main operating system
  3. Standardization of application server instance layout, to aid in automation and maintenance

 

Xtivia has created a set of Chef cookbooks which contain recipes that help with installing, configuring, and managing Liferay installations of any size. A few example scenarios and advantages provided by the Chef recipes are as follows:

  1. The Chef recipes created by Xtivia provide the flexibility to configure the right kind of database, use the correct driver, and so on.
  2. Chef cookbooks created by Xtivia also provide functionality to set up a clustered Liferay environment with a simple Chef recipe.
  3. The Chef recipes created by Xtivia are managed using Berkshelf, and are fully compatible with AWS OpsWorks (see the workflow sketch after this list).
  4. You can create a full-stack Liferay environment using the Chef recipes created by Xtivia in a matter of minutes.
  5. Xtivia recipes can also be used to upgrade Liferay service packs.
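
As a rough illustration of the workflow that Berkshelf-managed cookbooks enable, the commands below show a local (chef-zero) converge of a node; the cookbook and run-list entry liferay::default are hypothetical placeholders rather than the actual names of Xtivia’s cookbooks.

# Resolve the cookbook dependencies declared in the Berksfile and vendor them
# into ./cookbooks for a local run.
berks install
berks vendor cookbooks

# Converge this node using the vendored cookbooks; the run-list entry
# "recipe[liferay::default]" is a hypothetical placeholder.
sudo chef-client --local-mode -o 'recipe[liferay::default]'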

 

In a later blog entry, we will discuss some of the various challenges of using Chef to install, configure, and manage Liferay and applications similar to Liferay. If you are interested in learning more about the details of using Chef (or any other configuration management tool) to automate the management of your Liferay environments, reach out to us today!

How to use Liferay Audience Targeting to Control Navigation Elements

Liferay’s Audience Targeting application raises the engagement experience of your portal to a whole new level. This app allows you to segment your audience, target specific content to different user segments, and create campaigns to target content to user segments. It also allows you to track user actions and generate reports that provide insight into the effectiveness of your campaigns.

As an example, in an intranet scenario for a global multinational company, you might use audience targeting to segment users by location and target content based on these location-based segments. This enables your users to only see content relevant to them and reduces the noise.

In this post I will walk you through a tutorial on how to show/hide navigation pages based on user segments. The notion is that we will defer to Liferay’s Role Based Access Control (RBAC) to filter the navigation based on the security set-up, and then we will apply another layer of filtering to filter out any pages that are not relevant to the currently logged-in user because of the user segments that he/she is not a member of.

The 3 cases I need to consider when displaying a page are –

  • Show pages that have no user segments associated to them
  • Display pages that may have categories assigned to them, but don’t have anything to do with user segments
  • Display pages that have user segments selected that match the current user.

ServiceLocator needs to be available to make this work. Edit your portal-ext.properties file to include the following properties:

velocity.engine.restricted.classes=
velocity.engine.restricted.variables=

To keep the code clean I like to define the main variables I'll be using for the theme in the init_custom.vm file.

init_custom.vm


## -------- Audience Targeting Section -------- ##
#set ($userSegmentLocalService = $serviceLocator.findService("content-targeting-api","com.liferay.content.targeting.service.UserSegmentLocalService"))
#set ($assetCategoryLocalService = $serviceLocator.findService("com.liferay.portlet.asset.service.AssetCategoryLocalService"))
#set ($userSegmentIds = $request.getAttribute("userSegmentIds"))

  • userSegmentIds returns all userSegmentIds that match the current user on the site.
  • userSegmentLocalService will be used to get more information about a particular userSegment.
  • assetCategoryLocalService is used to get the assetCategories for each page. Note that an assetCategoryId is different from a userSegmentId; each userSegment has an assetCategoryId associated with it. These are the values we will compare next.

navigation.vm

Below, I’m going to explain each part of the code I added to navigation.vm.


#set ($navItemCategoryIds = $assetCategoryLocalService.getCategoryIds("com.liferay.portal.model.Layout", $nav_item.getLayout().getPlid()))

Since the code I added is inside the foreach loop for $nav_item, it runs once for each navigation item. The line above gets the assetCategoryIds associated with each page.

The first foreach loop iterates over each userSegmentId and gets the assetCategoryId associated with it. The second foreach loop iterates over each assetCategoryId for the $nav_item. I used two variables as flags: $hasUserSegment and $ignoreCategory.


#foreach ($id in $userSegmentIds)
    #set ($userSegmentId = $userSegmentLocalService.getUserSegment($id).getAssetCategoryId())
    #foreach ($catId in $navItemCategoryIds)
        #if ($userSegmentId == $catId)
            #set ($hasUserSegment = true)
            #break
        #else
            #if ($userSegmentLocalService.fetchUserSegmentByAssetCategoryId($catId))
                #set ($hasUserSegment = false)
                #break
            #else
                #set ($ignoreCategory = true)
                #break
            #end
        #end
    #end
#end

The navigation link is displayed in the next section. I wrap it in an if statement that shows pages that have no userSegments or categories, pages whose categories match the current user’s userSegment assetCategoryId, and pages that don’t have any userSegments but may have other categories assigned to them.


#if ($navItemCategoryIds.size() == 0 || $hasUserSegment || $ignoreCategory)

Hope you enjoyed this post. Leave comments below if you have any questions.

Using Liferay Resources Importer to Develop Site Content

I’m going to share some tips and tricks I’ve used to keep track of changes to web content during the development stage of a project. To set the scene, I’m a front-end developer on a small project with a few back-end developers. The client has a lot of starter content in their mock-ups that they want created on the development server and then moved to the QA environment. During these early stages of development, other developers may want their local environment to be as close to the development server as possible. This is where we can let the Resources Importer do the heavy lifting and help us generate a .LAR file that can be imported into each developer’s local environment.

If you are not familiar with resources importer, it allows you to package content, pages, documents, and configurations inside a Liferay theme. Then when the theme is deployed it will create a site template or a site depending on the configuration. In Liferay, web content can be created using structures and templates. As the client gives feedback, these structures and templates may change over time. During the development stage of the project, it’s a good idea to put these structures and templates into your source control system (svn, git).  The resources importer uses a common pattern to organize the files in a way that the dependencies make sense. The folder structure explains which templates are linked to which structures, then which template to use for the web content article.

OK, now I’m going to get into the technical steps for using the resources importer.

Step One – require the resources importer dependency and enable developer mode.
In /WEB-INF/liferay-plugin-package.properties, add the following properties:


required-deployment-contexts=\
resources-importer-web

resources-importer-developer-mode-enabled=true

Developer mode will delete and rebuild the site template when the theme is deployed. (Note: don’t apply the site template to a site until all of the development is done; developer mode cannot delete a site template while it is being used by a site.)

Step Two – organize folders (the folders below are for maven)


sample-theme
  - src
    - main
      - resources
        - resources-importer
          - document_library
            - documents
          - journal
            - articles
            - structures
            - templates


 

Step Three – define your pages/layouts in sitemap.json file
Create a sitemap.json file located at sample-theme/src/main/resources/resources-importer/


{
  "layoutTemplateId": "1_column",
  "publicPages": [
    {
      "friendlyURL":"/home",
      "name":"Home",
      "title":"Home"
    }
  ]
}

Step Four – create web content with structures and templates

First create the structure .xml file for the web content and place it in sample-theme/src/main/resources/resources-importer/journal/structures/Basic Web Content.xml.
When creating the structure, you must use Documents and Media for any images so that they can be referenced later in the journal article. The template file goes in the templates folder, inside a subfolder that matches the structure name, as described next.

Create the Velocity or FreeMarker template and place it in the templates folder, making sure the subfolder name matches the name of the structure. Ex: sample-theme/src/main/resources/resources-importer/journal/templates/Basic Web Content/Basic Web Content.vm

The article is an XML file that has the structure filled out. The easiest way to get this XML file is to create your web content using the structure and template; if the resources importer is deployed, a download button will be available when you create the web content. Download this file as a starting point and place it in sample-theme/src/main/resources/resources-importer/journal/articles/Basic Web Content/Basic Web Content.xml. The folder name (Basic Web Content) identifies the template that should be used.

Images that may be used in web content should be placed in sample-theme/src/main/resources/resources-importer/document_library/documents/.
Then, in the article XML file, reference an item from Documents and Media with [$FILE=welcome_cube.png$]. Note that the image-type structure field cannot be used with the resources importer; the image needs to be a Documents and Media item in order to be referenced in the web content.

Now, to get your web content to display on a page, let’s revisit sitemap.json, located at sample-theme/src/main/resources/resources-importer/sitemap.json:

{
	"layoutTemplateId": "2_columns_ii",
	"publicPages": [
		{
			"columns": [
				[
					{
						"portletId": "58"
					}
				],
				[
					"Basic Web Content.xml"
				]
			],
			"friendlyURL": "/home",
			"name": "Welcome",
			"title": "Welcome"
		}
	]
}

layoutTemplateId: this corresponds to the layout id. The easiest way to find the layout id is to view the layouts using Edit from the dockbar and inspect the layout thumbnail image; the image name is the layout id for the OOTB Liferay layouts.

columns: contains a [] array for each column of the layout.

“portletId”: “58”: 58 is the portlet id of the OOTB Sign In portlet.

“Basic Web Content.xml”: the file name of the journal article.

A good way to see the latest and greatest features of the resources importer is to look at the test-resources-importer-portlet.

Step Five – turn off developer mode

/WEB-INF/liferay-plugin-package.properties


resources-importer-developer-mode-enabled=false

Step Six (Optional) – Define the Name of the Site

/WEB-INF/liferay-plugin-package.properties

If you set resources-importer-target-value, then when the theme is deployed the site will be created. If these properties are not added, the resources importer will create a site template based on the resources.


resources-importer-developer-mode-enabled=false
resources-importer-target-class-name=com.liferay.portal.model.Group
resources-importer-target-value=Sample Site

Advanced Options – Use Groovy Scripts/ Use Continuous Integration

The Resources Importer currently doesn’t support every mapping that a LAR file can contain, but the benefit of building out the resources is that you have the source files and the content will work with most future versions. Functionality that isn’t included with the Resources Importer, such as creating users and blog posts, can be completed with Groovy scripts.

Using a continuous integration solution such as Jenkins can be powerful: on the dev server, each deployment can delete the site and regenerate the content, so everyone stays on the same page as content gets updated.

Tuning Basic JVM Performance for Liferay DXP 7

Over the nearly 10 years that we have worked with the Liferay platform, we have had ample opportunity to hone our understanding of how Liferay interacts with the Java virtual machine (JVM), and how to optimally tune the JVM performance for Liferay as a Java application.

Increase JVM Heap Sizing

Out of the box, the Liferay DXP bundle ships with a set of JVM parameters that are well-geared towards development and experimental usage scenarios, but which are not optimal for testing, staging, and production-type environments. One of the first modifications that a well-tuned Liferay environment needs is a change to the default JVM heap sizing. XTIVIA has found that a good starting point for a Liferay implementation in a shared environment is to set the heap size statically at 3 gigabytes, with approximately half of that space used for the young generation, and a 768 megabyte permanent generation. This allows sufficient heap space to allow in-memory caches to operate well, while keeping garbage collection stop-the-world events down to a minimal duration.

To implement this, add or replace the following to the CATALINA_OPTS variable in ${liferay.home}/tomcat-8.0.32/bin/setenv.sh (or ${liferay.home}/tomcat-8.0.32/bin/setenv.bat for a Microsoft Windows environment):

-Xmx3g -Xms3g -XX:NewSize=1536m -XX:MaxNewSize=1536m -XX:MaxPermSize=768m -XX:PermSize=768m

Enable G1GC

Our experience with Liferay has been that garbage collection optimization is a critical part of a well-tuned environment; historically, this has required the use of the concurrent-mark-sweep garbage collection policy, with all of its reliance on fine-tuning to optimize performance. Luckily for us, Liferay DXP version 7.0 has made Java 7 a baseline requirement, which allows us to leverage the far superior G1GC policy developed by Oracle to guarantee superlative performance with a minimum of tuning overhead. At this point, XTIVIA recommends that all of our Liferay implementations leverage the G1GC garbage collection policy; to enable it, add the following to the same CATALINA_OPTS definition in the setenv.sh (or setenv.bat) file referenced above.

-XX:+UseG1GC

If you are using an environment configured with Java 8 (rather than Java 7), add the following to the above statement to leverage String deduplication:

-XX:+UseStringDeduplication

Note that if you enable G1GC, the following JVM settings included above are no longer necessary.

-XX:NewSize=1536m
-XX:MaxNewSize=1536m
-XX:MaxPermSize=768m
-XX:PermSize=768m

One other change that XTIVIA does recommend is enabling JVM garbage collection logging for all installations; details on this will be included in a later blog post.
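
Putting these recommendations together, a consolidated CATALINA_OPTS entry might look like the following. This is a sketch assuming Java 8 and the 3 GB sizing discussed above, not a drop-in value for every installation; with G1GC enabled, the NewSize and PermSize settings are omitted as noted.

# Example addition to the CATALINA_OPTS variable in
# ${liferay.home}/tomcat-8.0.32/bin/setenv.sh (sketch assuming Java 8,
# the 3 GB sizing above, and G1GC, so the NewSize/PermSize flags are omitted)
CATALINA_OPTS="$CATALINA_OPTS -Xms3g -Xmx3g -XX:+UseG1GC -XX:+UseStringDeduplication"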

Resolving WebSphere TCP bind failure errors on Linux

At times, we have seen situations where WebSphere TCP bind failures can occur during the JVM startup process on Linux. This can occur even when WebSphere is configured to use unique ports; the bind fails with a message in the logs indicating that there is a port conflict issue with the TCPC0003E error code. When this occurs, the IBM Channel framework will retry the bind every 5 seconds for up to 60 attempts; sometimes the TCP port will bind successfully during this retry process, but the desired behavior is that WebSphere consistently bind to the configured TCP port on the first attempt. An example of the log message displayed during a failure of this type is as follows:


[5/16/16 7:07:19:018 EDT] 00000000 TCPPort E TCPC0003E: TCP Channel TCP_2 initialization failed.
The socket bind failed for host * and port 10100. The port may already be in use.
[5/16/16 7:07:19:019 EDT] 00000000 WSChannelFram E CHFW0034W: The Transport Channel Service detected transport chain WCInboundDefaultSecure failed. 
The service will retry to start chain WCInboundDefaultSecure every 5000 milliseconds for up to 60 attempts.

To diagnose and resolve this sort of error, it is necessary to first validate that no other process is listening on the designated TCP port. The netstat utility is especially useful for this, as demonstrated in the example below. This netstat invocation shows there is no PID associated with the bound TCP port, which rules out a port conflict with another process. Here, the TCP state is TIME_WAIT and the local/foreign addresses are 127.0.0.1, which doesn’t look right, as WebSphere by default binds to 0.0.0.0 rather than 127.0.0.1. Also, the local and foreign ports should not be the same; this is indicative of a systemic issue at the OS level.


netstat -anp | grep 10100

#Failure Scenario:
Proto Recv-Q Send-Q Local Address     Foreign Address   State         PID/Program name
tcp   0      0      127.0.0.1:10100   127.0.0.1:10100   TIME_WAIT     -

#Working Scenario:
Proto Recv-Q Send-Q Local Address     Foreign Address   State         PID/Program name
tcp   0      0      0.0.0.0:10100     0.0.0.0:*         LISTEN        12366/java

Linux TCP Tuning

One reason for TCP ports remaining in TIME_WAIT state is due to the OS not cleaning up port references quickly enough; in such a scenario, it is necessary to tweak the wait timeout at the OS level. To check the current TCP wait timeout, check the value contained in the /proc virtual filesystem:


cat /proc/sys/net/ipv4/tcp_fin_timeout
60 #default

In order to lower the tcp_fin_timeout kernel value to 30 seconds for the running OS, the value can be changed in the proc VFS on the fly; for a permanent change in RHEL variants, it is necessary to edit the setting in /etc/sysctl.conf:


#As root, change runtime value
echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout
or
sysctl -w net.ipv4.tcp_fin_timeout=30

#For permanent change, edit /etc/sysctl.conf
net.ipv4.tcp_fin_timeout=30

Additionally, in order to prevent the issue from occurring again, it is necessary to configure the OS ephemeral port range to omit the ports assigned to WebSphere. In our case, WebSphere ports range from 11000-15000 across different environments, so we set the local port range to start at 15000, which excludes the WebSphere ports from the ephemeral range:


cat /proc/sys/net/ipv4/ip_local_port_range
9000    65500 #current

#As root, change runtime value to exclude WebSphere port range.
echo "15000 65500" > /proc/sys/net/ipv4/ip_local_port_range
or
sysctl -w net.ipv4.ip_local_port_range="15000 65500"

#For permanent change, edit /etc/sysctl.conf
net.ipv4.ip_local_port_range=15000 65500

#To load changes from /etc/sysctl.conf
sysctl -p

Next Steps

If the issue still persists after making these changes, use the soReuseAddr=1 custom property to force the TCP channel to attempt the bind with the socket reuse option. This custom property is enabled by default for WAS v8.5.5.1 and later.
In the administrative console, this property should be added in the following location:

For the WebContainer HTTPS Port
        Application servers > server1 > Web container transport chains > WCInboundDefaultSecure > TCP inbound channel (TCP_4) > Custom properties

For the WebContainer HTTP Port
        Application servers > server1 > Web container transport chains > WCInboundDefault > TCP inbound channel (TCP_2) > Custom properties

Additionally, it is recommended to disable IPv6 support for WebSphere by adding the java.net.preferIPv4Stack=true JVM custom property.
In the administrative console, this property should be added in the following location:
        Servers > Server Types > Application servers > server_name > Java and process management > Process definition > Java virtual machine > Custom Properties

Restart the JVM. At this point, the issue should be resolved!

Addresses in Infor CRM / SalesLogix

You may have noticed that in recent versions, Infor CRM introduced the concept of “Primary”, “Primary Billing” and “Primary Shipping” address.

This has been a source of confusion for many long-time Infor CRM / SalesLogix users, since these were not concepts they were used to.

What actually happens in the background can be a bit confusing so I thought I’d document it here:

Address

  1. Primary – checking this checkbox sets the ADDRESS.PrimaryAddress field = T
  2. Primary Billing – checking this checkbox updates:
    • The ACCOUNT.AddressID field
    • The ADDRESS.ISPRIMARY field = T
  3. Primary Shipping – checking this checkbox updates:
    • The Account.ShippingID field
    • The ADDRESS.ISMailing field = T

Note: If your Infor CRM environment has the Proximity Search add on, this is impacted as well. By default Proximity Search will only return address results based on the Account.ADDRESSID, which also means that only addresses with the “Primary Billing” checkbox checked will be returned in the results.

Also, Infor did not update the LAN client with this change, so this is something to be aware of if a customer is using BOTH environments.
On the LAN client there are only PRIMARY and SHIPPING checkboxes, and they essentially function the same as items #2 and #3 above, respectively.

Infor CRM / SalesLogix – Splitting up the values from a multi-select picklist

Customer Question: I have values in a multi-select picklist. Unfortunately many of them no longer exist in my picklist definition. How do I clean these up?

Answer: The challenge here is that these values are saved as comma-separated values in a single field in the database.
For example the Account Industry value in the database field could be: Aerospace, Automotive, Information Technology

To separate them and compare them to the master Account Industry picklist definition, it took some creative SQL.

Step 1: Define a SQL Function that Splits the values

CREATE FUNCTION [dbo].[Split]
(
@String NVARCHAR(4000),
@Delimiter NCHAR(1)
)
RETURNS TABLE
AS
RETURN
(
WITH Split(stpos,endpos)
AS(
SELECT 0 AS stpos, CHARINDEX(@Delimiter,@String) AS endpos
UNION ALL
SELECT endpos+1, CHARINDEX(@Delimiter,@String,endpos+1)
FROM Split
WHERE endpos > 0
)
SELECT 'Id' = ROW_NUMBER() OVER (ORDER BY (SELECT 1)),
'Data' = SUBSTRING(@String,stpos,COALESCE(NULLIF(endpos,0),LEN(@String)+1)-stpos)
FROM Split
)
GO

Step 2: Use this function to create a temporary table called #tempPicklistItems and populate it using a cursor.
All credit for this code goes to Adam Pinilla – SQL Master!!


CREATE TABLE #tempPicklistItems (

ACCOUNTID CHAR(12),
PICKLISTTEXT VARCHAR(150)
)

DECLARE @accountid CHAR(12)
DECLARE @accounttext VARCHAR(150)

DECLARE picklist CURSOR
FOR SELECT ACCOUNTID, INDUSTRY FROM sysdba.ACCOUNT WHERE ISNULL(INDUSTRY,'') <> ''
OPEN picklist

FETCH NEXT FROM picklist INTO @accountid, @accounttext

WHILE @@FETCH_STATUS = 0
BEGIN

insert into #tempPicklistItems
select @accountid, ltrim(rtrim(DATA))
from dbo.Split(@accounttext,',')

FETCH NEXT FROM picklist INTO @accountid, @accounttext
END

Close picklist;
DEALLOCATE picklist;

Step 3: Now that we have our memory table, we can join to it and display our results!


select a.ACCOUNTID,ACCOUNT,p.PICKLISTTEXT from sysdba.ACCOUNT a inner join #tempPicklistItems p on p.ACCOUNTID=a.accountid

Caution: Remember that #tempPicklistItems is a temporary table and only exists in the current session! It will not be available when you open a new SQL Server Management Studio window.

Hope this is useful the next time someone needs to clean up data in multi-select picklists!

DB2 SQL: Rewriting a Distinct with a Correlated Sub-Query to a Group By for Performance Improvement

Sometimes a client calls for help with a performance problem. On one particular Tuesday, a client called about a long-running query. Actually they basically said “What’s wrong with DB2? my query is running long! Are there lock timeouts happening?”
The query came from a new application implemented just the weekend before. When I asked how long the query was running, the answer was “more than the three-minute time-out”. This is a transaction processing database, so three minutes is generally not acceptable.

The Query

The query in this case was amazingly simple – joining only three tables:

select distinct(driver_id),
    (select max(pos_timestamp) from schema1.position as pos2
            where pos2.driver_id = pos.driver_id) as pos_timestamp,
    (select max(pos_timestamp) from schema1.breadcrumb as bc
            where bc.driver_id = pos.driver_id) as last_breadcrumb_ts
    from schema1.position as pos where pos_timestamp > current_timestamp - 6 hours
    order by pos_timestamp

Explaining this statement gives a fairly ugly explain plan, although still simple:

Access Plan:
-----------
	Total Cost: 		1.16818e+006
	Query Degree:		1

                                               Rows 
                                              RETURN
                                              (   1)
                                               Cost 
                                                I/O 
                                                |
                                               5034 
                                              TBSCAN
                                              (   2)
                                           1.16818e+006 
                                              125360 
                                                |
                                               5034 
                                              SORT  
                                              (   3)
                                           1.16818e+006 
                                              125360 
                                                |
                                              27300.6 
                                              NLJOIN
                                              (   4)
                                           1.16817e+006 
                                              125360 
                                     /----------+----------\
                                 27300.6                      1 
                                 NLJOIN                    GRPBY 
                                 (   5)                    (  12)
                              1.08987e+006                 529.777 
                                 125115                      245 
                           /-------+--------\                |
                       27300.6                 1           1.42729 
                       FETCH                GRPBY          TBSCAN
                       (   6)               (  10)         (  13)
                       2612.27              39.9434        529.777 
                       871.255              4.55094          245 
                    /----+----\               |              |
                27300.6    9.61044e+007     19091.1         7185 
                RIDSCN   TABLE: SCHEMA1     IXSCAN    TABLE: SCHEMA1
                (   7)       POSITION       (  11)       BREADCRUMB
                905.859         Q7           1534            Q1
                102.546                     174.777 
                  |                           |
                27300.6                  9.61044e+007 
                SORT                   INDEX: SCHEMA1
                (   8)                  POS_DRIVER_NDX
                905.859                       Q4
                102.546 
                  |
                27300.6 
                IXSCAN
                (   9)
                899.457 
                102.546 
                  |
             9.61044e+007 
           INDEX: SCHEMA1
 IDX_POSITION__POS_TIMESTAMP_03062015
                  Q7

Looking at this explain plan, we can see that most of the expense comes in with operator #5 – an NLJOIN that is joining the POSITION table to ITSELF.

Rewriting

I immediately thought that rewriting might help this particular query significantly. If I could just make that join more efficient somehow. Both accesses to the table were through indexes, and one of them through index-only access.

I first tried to break out the correlated subqueries into Common Table Expressions (CTEs). This bumped my timeron count up to over 5 million – 5 times worse than the original. We can’t all come up with the perfect answer the first time. Then, as I continued to look at the query, I realized that I could write the distinct as a group-by instead. I rewrote the query to this:

select pos.driver_id
        , max(pos_timestamp) as max_pos_timestamp
        ,(select max(pos_timestamp) from schema1.breadcrumb as bc
            where bc.driver_id = pos.driver_id) as last_breadcrumb_ts
    from schema1.position as pos
    where pos_timestamp > current_timestamp - 6 hours
    group by pos.driver_id
    order by max_pos_timestamp

In every scenario that I could come up with, the results from the two queries were the same. I asked the application owner to verify that the results of this other way of writing the query were indeed what he needed.

The cost of the second query was just 17,489 Timerons! That was a 98.5% reduction in the cost of the query. Here’s what the explain plan looks like for the rewritten query:

Access Plan:
-----------
	Total Cost: 		17489.1
	Query Degree:		1

                               Rows 
                              RETURN
                              (   1)
                               Cost 
                                I/O 
                                |
                               5034 
                              NLJOIN
                              (   2)
                              17489.1 
                              1116.25 
                           /----+-----\
                        5034             1 
                       TBSCAN         GRPBY 
                       (   3)         (  12)
                       2622.28        529.777 
                       871.255          245 
                         |              |
                        5034          1.42729 
                       SORT           TBSCAN
                       (   4)         (  13)
                       2622.12        529.777 
                       871.255          245 
                         |              |
                        5034           7185 
                       GRPBY     TABLE: SCHEMA1
                       (   5)       BREADCRUMB
                       2620.41          Q1
                       871.255 
                         |
                        5034 
                       TBSCAN
                       (   6)
                       2620.24 
                       871.255 
                         |
                        5034 
                       SORT  
                       (   7)
                       2620.07 
                       871.255 
                         |
                       27300.6 
                       FETCH 
                       (   8)
                       2612.27 
                       871.255 
                    /----+----\
                27300.6    9.61044e+007 
                RIDSCN   TABLE: SCHEMA1
                (   9)       POSITION
                905.859         Q4
                102.546 
                  |
                27300.6 
                SORT  
                (  10)
                905.859 
                102.546 
                  |
                27300.6 
                IXSCAN
                (  11)
                899.457 
                102.546 
                  |
             9.61044e+007 
           INDEX: SCHEMA1
 IDX_POSITION__POS_TIMESTAMP_03062015
                  Q4

According to the DB2 index advisor, there are also indexes that I can add to reduce each side of the explain to index-only access, further reducing the cost by about 80%.
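
If you want to see what the Design Advisor recommends for a statement like this yourself, db2advis can be pointed at the query directly; the database name and input file below are placeholders.

# Run the DB2 Design Advisor against the query; PRODM and query.sql are placeholders
# for your database name and a file containing the SQL statement.
db2advis -d PRODM -i query.sql -t 5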

Summary

Some say that the DB2 optimizer is so good that you don’t have to rewrite queries. A significant portion of the time I find that to be true, but there are some edge cases like this type of query that are the exception. An SQL code review with a talented DB2 DBA can cut minutes or hours off of query execution time.

Analyzing Lock Escalation

What is Lock Escalation?

When LOCKLIST and MAXLOCKS are not set to AUTOMATIC, or total system memory is constrained, lock escalation can occur. When there is not enough room in the lock list to store additional locks that an application needs to continue processing, lock escalation occurs. DB2 will attempt to satisfy queries and perform updates, inserts, and deletes by locking at the row level. Each row level lock requires a certain amount of memory that varies by the version of DB2. The IBM DB2 Knowledge Center page for the LOCKLIST parameter spells out how much memory each row lock takes. When there is no more memory available in the lock list for additional locks, or when a single application reaches the percentage of the lock list defined by MAXLOCKS, DB2 performs what is called lock escalation. This means that applications with row-level locks instead try to acquire table-level locks.

The Problem

The table-level locks acquired with lock escalation are much worse for concurrency. Now instead of only individual rows being unavailable for reads, updates, and deletes (depending on the isolation levels of the applications involved and the work being performed), the entire table may be unavailable. This can lead to other negative locking phenomena, including longer lock-wait times, lock timeouts, and even deadlocks.

Analyzing the Problem

Lock escalation is one of the more negative things that can happen with DB2’s concurrency control. It is something DB2 databases should be monitored for and that should be addressed if it occurs on an ongoing basis. Lock escalation is documented in the DB2 diagnostic log, and this is one of the better places to look for it. Once my diagnostic log parser alerts me that there is lock escalation occurring, I spend some time analyzing to see which databases (if more than one on the instance) and which times it is occurring at. The db2diag tool is a powerful tool in this analysis. The following syntax will list out occurrences of lock escalation, including the database name and time stamp:

$ db2diag -g message:=scalation -fmt '%ts %db %errname'
2016-03-28-01.13.34.680662 PRODM
2016-03-28-01.13.34.681123 PRODM
2016-03-28-01.14.41.746583 PRODM
2016-03-28-01.14.41.747016 PRODM
2016-03-28-01.16.28.127806 PRODM
2016-03-28-01.16.28.128327 PRODM
2016-03-28-01.17.20.249458 PRODM
2016-03-28-01.17.20.250037 PRODM
2016-03-28-02.45.10.337993 PRODM
2016-03-28-02.45.10.338500 PRODM
2016-03-28-02.45.46.461853 PRODM
2016-03-28-02.45.46.462300 PRODM
...

This is a bit messier than I would like it to be, but when using db2diag, for some reason, the errname field is not populated for lock escalations. You can get the same info from SYSIBMADM.PDLOGMSGS_LAST24HOURS or the table function PD_GET_LOG_MSGS, where oddly enough the msgnum field IS populated:

select timestamp
    , substr(dbname,1,12) as dbname 
from sysibmadm.PDLOGMSGS_LAST24HOURS 
where msgnum=5502 
with ur

TIMESTAMP                  DBNAME
-------------------------- ------------
2016-03-28-12.20.25.549646 PRODM
2016-03-28-12.20.24.804685 PRODM
2016-03-28-12.20.14.725929 PRODM
2016-03-28-12.20.02.290882 PRODM
...

Analyzing the timing of lock escalation events can be quite useful to determine if perhaps there is an application that is using a higher isolation level and also if there may be missing indexes for the workload. There is also a lot more detailed information in the MSG field of SYSIBMADM.PDLOGMSGS_LAST24HOURS or PD_GET_LOG_MSGS – which may include the application name, the specific SQL being executed, and other details.
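
For a quick view of when escalations cluster, the same db2diag output can be bucketed by hour with standard shell tools:

# Count lock escalation messages per hour; the first 13 characters of the
# db2diag timestamp are YYYY-MM-DD-HH.
db2diag -g message:=scalation -fmt '%ts' | cut -c1-13 | sort | uniq -c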

Resolving the Problem

The most obvious solution here is to increase the size of LOCKLIST in the db cfg using syntax like this:

db2 update db cfg for PRODM using LOCKLIST 30000

It is also possible that the MAXLOCKS parameter may need to be adjusted. Both of these parameters can be set to AUTOMATIC and tuned by STMM(Self Tuning Memory Manager). In fact, these are the two parameters I’m most likely to include in STMM tuning because the impact of having them too small can be so high, and because from what I’ve seen, DB2 seems to do a good job of tuning them.
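
Before and after adjusting these parameters, it is worth checking the current configuration and the cumulative escalation counters. A minimal sketch using the SNAPDB administrative view follows; the database name PRODM is a placeholder.

# Check the current lock-related configuration for the database (PRODM is a placeholder).
db2 get db cfg for PRODM | grep -iE "LOCKLIST|MAXLOCKS"

# Check cumulative lock escalation counters and current lock list usage.
db2 connect to PRODM
db2 -x "SELECT LOCK_ESCALS, X_LOCK_ESCALS, LOCK_LIST_IN_USE FROM SYSIBMADM.SNAPDB"
db2 connect reset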
