We all worry about the performance of our servers; it is what separates the DBAs from the pretenders.  You never know which setting will be the one that makes the big difference.  The worst part is that a setting that works today may hinder performance, or not even matter, tomorrow, or on another identically configured system.  This is clearly the most frustrating part of our job.

Recently, we encountered an issue that was just different enough that I felt it needed to be shared, since very few people would think to look where the problem turned out to be.  Here is the story; hopefully it will help someone else out there.

Client connections were taking a considerable amount of time to establish.  The decision was made to increase the number of poll threads to clear up this issue.  It did clear up that issue, but overall performance then became very poor, and data files started backing up over the next three days.
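For context, the number of poll threads and the VP class they run on are set with the NETTYPE parameter in the ONCONFIG file.  The protocol name and connections-per-thread value below are illustrative assumptions, not taken from the system in question, but a change like the one described might look roughly like this:

```
# NETTYPE <protocol>,<poll threads>,<connections per poll thread>,<VP class>

# Before: 3 poll threads running in-line on the CPU VP class
NETTYPE soctcp,3,50,CPU

# After: one poll thread per CPU VP (4 CPU VPs in this case)
NETTYPE soctcp,4,50,CPU
```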

Here is what we discovered.  Prior to the change, there were four CPU VPs, with three poll threads running in-line on them.  After the change, every CPU VP was running an in-line poll thread.  The “thread migration statistics” section at the bottom of the onstat -g sch output showed that the CPU VPs were doing more than twice as much work on the in-line poll threads as on the rest of their designated tasks.  We moved the poll threads to NET VPs, and everything sped up: it took only a few hours to clear the backlog that had taken three days to build up.
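The fix amounts to changing the VP class in the NETTYPE entry from CPU to NET (again, the protocol name and connection count here are illustrative, not the actual values from this system):

```
# Poll threads now run on network (NET) VPs, leaving the CPU VPs
# free for user threads and transaction work.
# Verify the effect afterward with: onstat -g sch
NETTYPE soctcp,4,50,NET
```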

Therefore, although the IBM Informix documentation suggests that running poll threads on the CPU VPs makes them more responsive, you may find that on a sufficiently busy system this actually hurts performance by overloading the CPU VPs.  Don’t be afraid to move the poll threads to NET VPs to free the CPU VPs for more important work, such as processing transactions.  Things like this are what make performance tuning an iterative process that is, in many ways, an art, and why we help clients with their Informix needs.