We all worry about the performance of our servers.  It is what separates the DBAs from the pretenders.  You never know which setting will be the one that makes the big difference.  The worst part is that the setting that works today may hinder performance, or simply not matter, tomorrow or on another identically configured system.  This is easily the most frustrating part of our job.

Recently, we saw an issue that was just different enough that I felt it needed to be shared, because very few people would think to even look there.  Here is the story; hopefully it will help someone else out there.

Client connections were taking quite a bit of time to establish.  The decision was made to increase the number of poll threads to clear this up.  It did clear that issue up, but overall performance then became really bad, and data files started backing up over the next three days.
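For context, the number of poll threads in Informix is controlled by the NETTYPE parameter in the ONCONFIG file.  The sketch below is only illustrative; the protocol name and counts are assumptions, not the client's actual settings.  The second field is the poll thread count, which is what was raised here.

    # NETTYPE fields: protocol, poll threads, connections per
    # poll thread, VP class the poll threads run on
    NETTYPE soctcp,4,50,CPU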

Here is what we discovered.  Prior to the change, there were 4 CPU VPs with 3 poll threads running in-line on them.  After the change, every CPU VP was running a poll thread in-line.  The thread migration statistics at the bottom of the onstat -g sch output showed that the CPU VPs were doing more than twice as much work on the in-line poll threads as they were doing for the rest of their designated tasks.  We moved the poll threads to NET VPs, and now everything is faster.  It took only a few hours to catch up on the backlog that had taken three days to build up.
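A minimal sketch of the kind of change involved, again with illustrative values (the soctcp protocol and the thread counts are assumptions): switching the VP class in NETTYPE from CPU to NET moves the poll threads off the CPU VPs.  NETTYPE is not changed dynamically, so the instance typically needs a restart before re-checking the thread migration statistics.

    # Before: poll threads run in-line on the CPU VPs
    NETTYPE soctcp,4,50,CPU

    # After: poll threads run on dedicated network VPs instead
    NETTYPE soctcp,4,50,NET

    # After restarting the instance, re-check the thread
    # migration statistics at the bottom of this output
    onstat -g sch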

So even though IBM Informix documents that running poll threads on the CPU VPs makes them more responsive, you may find that on a busy enough system this adversely affects performance by simply overloading the CPU VPs.  Don’t be afraid to move them to NET VPs to free up the CPUs for more important work, like transactions.  Things like this are what make performance tuning an iterative process that is largely an art, and why we help clients with their Informix needs.
