Analyzing Lock Escalation

What is Lock Escalation?

When LOCKLIST and MAXLOCKS are not set to AUTOMATIC, or when total system memory is constrained, lock escalation can occur. DB2 attempts to satisfy queries and perform updates, inserts, and deletes by locking at the row level. Each row-level lock requires a certain amount of memory, which varies by DB2 version; the IBM DB2 Knowledge Center page for the LOCKLIST parameter spells out how much memory each row lock takes. When there is no more room in the lock list to store the additional locks an application needs to continue processing, or when a single application reaches the percentage of the lock list defined by MAXLOCKS, DB2 performs what is called lock escalation: it replaces an application's many row-level locks with a single table-level lock.
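To check how these parameters are currently set on a given database, one option (a sketch, assuming a database named PRODM as in the examples below) is to query the SYSIBMADM.DBCFG administrative view; a VALUE_FLAGS of AUTOMATIC indicates the parameter is under STMM control:

select substr(name,1,12) as name
    , substr(value,1,12) as value
    , value_flags
from sysibmadm.dbcfg
where name in ('locklist','maxlocks')
with ur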

The Problem

The table-level locks acquired with lock escalation are much worse for concurrency. Now instead of only individual rows being unavailable for reads, updates, and deletes (depending on the isolation levels of the applications involved and the work being performed), the entire table may be unavailable. This can lead to other negative locking phenomena, including longer lock-wait times, lock timeouts, and even deadlocks.

Analyzing the Problem

Lock escalation is one of the more harmful things that can happen with DB2’s concurrency control. DB2 databases should be monitored for it, and it should be addressed if it occurs on an ongoing basis. Lock escalation is recorded in the DB2 diagnostic log, and that is one of the better places to look for it. Once my diagnostic log parser alerts me that lock escalation is occurring, I spend some time analyzing which databases (if there is more than one on the instance) are affected and at which times it occurs. The db2diag tool is powerful for this analysis. The following syntax lists occurrences of lock escalation, including the database name and timestamp:

$ db2diag -g message:=scalation -fmt '%ts %db %errname'
2016-03-28-01.13.34.680662 PRODM
2016-03-28-01.13.34.681123 PRODM
2016-03-28-01.14.41.746583 PRODM
2016-03-28-01.14.41.747016 PRODM
2016-03-28-01.16.28.127806 PRODM
2016-03-28-01.16.28.128327 PRODM
2016-03-28-01.17.20.249458 PRODM
2016-03-28-01.17.20.250037 PRODM
2016-03-28-02.45.10.337993 PRODM
2016-03-28-02.45.10.338500 PRODM
2016-03-28-02.45.46.461853 PRODM
2016-03-28-02.45.46.462300 PRODM
...

This is a bit messier than I would like it to be, but when using db2diag, for some reason, the errname field is not populated for lock escalations. You can get the same information from SYSIBMADM.PDLOGMSGS_LAST24HOURS or the table function PD_GET_LOG_MSGS, where, oddly enough, the msgnum field IS populated:

select timestamp
    , substr(dbname,1,12) as dbname 
from sysibmadm.PDLOGMSGS_LAST24HOURS 
where msgnum=5502 
with ur

TIMESTAMP                  DBNAME
-------------------------- ------------
2016-03-28-12.20.25.549646 PRODM
2016-03-28-12.20.24.804685 PRODM
2016-03-28-12.20.14.725929 PRODM
2016-03-28-12.20.02.290882 PRODM
...

Analyzing the timing of lock escalation events can be quite useful to determine if perhaps there is an application that is using a higher isolation level and also if there may be missing indexes for the workload. There is also a lot more detailed information in the MSG field of SYSIBMADM.PDLOGMSGS_LAST24HOURS or PD_GET_LOG_MSGS – which may include the application name, the specific SQL being executed, and other details.
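If you want to see that extra detail, here is a sketch of pulling the MSG column along with the timestamp from the same administrative view, still filtering on the lock escalation message number used above:

select timestamp
    , substr(msg,1,200) as msg 
from sysibmadm.PDLOGMSGS_LAST24HOURS 
where msgnum=5502 
order by timestamp desc
with ur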

Resolving the Problem

The most obvious solution here is to increase the size of LOCKLIST in the db cfg using syntax like this:

db2 update db cfg for PRODM using LOCKLIST 30000

It is also possible that the MAXLOCKS parameter may need to be adjusted. Both of these parameters can be set to AUTOMATIC and tuned by STMM (the Self-Tuning Memory Manager). In fact, these are the two parameters I am most likely to include in STMM tuning, because the impact of having them too small can be so high, and because, from what I have seen, DB2 does a good job of tuning them.
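As a sketch, enabling STMM tuning of both parameters for the database from the earlier example looks like this:

db2 update db cfg for PRODM using LOCKLIST AUTOMATIC MAXLOCKS AUTOMATIC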

Timing your SQL Server Maintenance Jobs

Maintenance in general is a necessity for SQL Server, no different than changing the oil in your car or going to the doctor for an annual exam. There are going to be times when you need to run maintenance on your server. The tricky part is determining when to start the maintenance jobs so that they finish before the busy time. For example, what if you need to back up your database, then re-index it, and follow that up with a consistency check?
The common approach to this problem is to determine how long each job executes (often by trial and error) and then adjust the start time of each job to give the previous one enough time to finish. The problem with this method is that you are hoping the first job in the chain completes on time before the second one starts, so you end up leaving large gaps so that one long-running job does not step on the next. However, there are better options…
Option 1
If you are using a maintenance plan, you can keep all the tasks that are scheduled to run at the same time in the same sub-plan. Sometimes this does not provide the flexibility you want, but it is an effective method.
Option 2
You can create multiple steps within a single job. Using the example above, where you want to run a backup, then re-index, and then DBCC, you can create three different steps; that way, as soon as one step completes, the next step is executed. This method removes the need to guess when one job will finish so the next can start.
Option 3
Each task could have its own job, with the last step of each job starting the next job. This adds a lot of flexibility to your maintenance. I like to use this in a couple of different kinds of situations.
1. If your maintenance is done using multiple tools, for example a Red Gate backup, a custom re-indexing plan, and a simple T-SQL script to run a consistency check.
2. If your maintenance is done across multiple servers. If you have three servers that all back up to the same network share, you could have one server execute at a time so you do not clog up the network and the storage.
Adding a step to execute the next job is pretty simple:
exec msdb.dbo.sp_start_job @job_name = N'My Job Name';

If you need to schedule this to occur across servers, you can simply make the call to the other server using a linked server.
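As a sketch, assuming a linked server named REMOTESRV that has RPC Out enabled (the linked server and job names here are placeholders), the remote call would look like this:

exec [REMOTESRV].msdb.dbo.sp_start_job @job_name = N'My Job Name';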
I hope this tip has helped you in one fashion or another. If you would like more, my list of TOP 10 TIPS FOR SQL SERVER PERFORMANCE AND RESILIENCY can be found here, starting with Tip #1.

Stopping your SQL Server Jobs

Maintenance in general is a necessity for SQL Server, no different than changing the oil in your car or going to the doctor for an annual exam. There are going to be times when you need to perform maintenance on your server. The tricky part is determining when to start the maintenance so that it completes before the busy time. The common approach is to determine how long a job executes (often by trial and error) and then adjust the start time to give the job enough time to finish. There is another way…

SQL Server has a number of system stored procedures that you can use to perform tasks you might otherwise do in the user interface. For example, if you want to stop a job, you can open SQL Server Management Studio, navigate to the job, right-click, and stop it. Here is where the system-supplied stored procedure comes into play. What if your busy time of the day starts at 6:00 AM, and you want to make sure the indexing has finished by 5:00 AM so that the system is ready to take on the day? Do you really want to wake up at 5:00 AM just to right-click and stop the job, on the chance that it is still running?

Simply schedule a job that will execute at 5:00 AM (the time you want to make sure the maintenance job is done by), and create a step that will stop the job.

exec msdb.dbo.sp_stop_job @job_name = N'My Job Name';
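If you prefer to script the schedule rather than set it in Management Studio, here is a sketch of attaching a daily 5:00 AM schedule to that watchdog job; the job name is a placeholder for whichever job holds the step above:

EXEC msdb.dbo.sp_add_jobschedule @job_name = N'Stop Index Rebuild Watchdog',
    @name = N'Daily 5 AM',
    @freq_type = 4,               -- daily
    @freq_interval = 1,           -- every day
    @active_start_time = 050000;  -- 5:00:00 AM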

All fairly simple. But what if you want to add some logic to the step so that it does not just blindly try to stop the job, but first checks whether the job is actually executing? And while we are at it, we should add a line of code that emails us whenever the maintenance job has run long and had to be stopped.

-- Check whether the job is still executing
Select name
from msdb..sysjobs j
join msdb..sysjobactivity a on j.job_id = a.job_id and j.name = 'My Job Name'
Where start_execution_date is not null and stop_execution_date is null
If @@rowcount > 0
Begin
    -- The job is still running: stop it and send a notification email
    EXEC msdb.dbo.sp_stop_job @job_name = 'My Job Name'
    EXEC msdb.dbo.sp_send_dbmail @profile_name = 'MyMailProfile', @recipients = 'Me@xtivia.com',
        @body = 'The Indexing Rebuild Job had to be stopped due to long run time.', @subject = 'Index Rebuild' ;
End
Else Return

I hope this tip has helped you in one fashion or another. If you would like more, my list of TOP 10 TIPS FOR SQL SERVER PERFORMANCE AND RESILIENCY can be found here, starting with Tip #1.

Dropping a DB2 Schema and Objects it Contains

It may not be something you have to do terribly often, but that makes it all the more important to document how to do it. Dropping a schema and all of the objects in that schema used to be tedious and time-consuming. We had to look for a large number of different types of objects and individually drop each one before we were able to drop the schema itself. In DB2 9.5, IBM introduced the ADMIN_DROP_SCHEMA procedure to help with this.

Identifying Objects in a Schema

It is best to first list out the objects in a schema so you can communicate to others precisely what is being dropped, or at least record it before the schema is dropped. Also in DB2 9.5, the administrative view SYSIBMADM.OBJECTOWNERS was introduced. It joins together the various system catalog tables that list the different kinds of objects, so there is one place to query for all of them. It is easy to query for the objects in a particular schema, and it lists the schema itself as an object:

select substr(OWNER,1,12) as OWNER
    , OWNERTYPE
    , substr(OBJECTNAME,1,30) as OBJECTNAME
    , OBJECTTYPE 
from SYSIBMADM.OBJECTOWNERS 
where OBJECTSCHEMA='SSIRS_AGENCY' 
with ur;

OWNER     OWNERTYPE OBJECTNAME               OBJECTTYPE              
--------- --------- ------------------------ ------------------------
SYSIBM    S         SSIR_DMART               SCHEMA                  
DB2BCUP   U         SQL150114132019140       TABLE CONSTRAINT        
DB2BCUP   U         INDV_AID_CD_DM           TABLE                   

  3 record(s) selected.

In the above SQL, you would obviously have to replace the schema name with the name of the schema you are working with.

Backout Planning

Like any good DBA, I have a backout plan for every change I perform, and this is no different. Here is the data to collect before dropping a schema (a sketch of the commands for gathering it follows the list):

  • db2look with syntax for the whole database
  • List of objects in the schema from SYSIBMADM.OBJECTOWNERS
  • Count of rows in all tables in the schema from SYSCAT.TABLES
  • Exported data from the tables in the schema in del and/or ixf formats
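A minimal sketch of gathering that data, assuming the database is named PRODM and using the schema and table from the example output above (file names, and the exact db2look options you need, may vary):

db2look -d PRODM -e -l -x -o prodm_ddl.sql
db2 connect to PRODM
db2 "select substr(tabname,1,30) as tabname, card from syscat.tables where tabschema='SSIRS_AGENCY' with ur"
db2 "export to indv_aid_cd_dm.del of del select * from SSIRS_AGENCY.INDV_AID_CD_DM"
db2 "export to indv_aid_cd_dm.ixf of ixf select * from SSIRS_AGENCY.INDV_AID_CD_DM"

Note that CARD in SYSCAT.TABLES reflects the last RUNSTATS; run a COUNT(*) per table if you need exact row counts.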

My many-layered backout options are:

  1. Re-create the objects from the db2look ddl
  2. Import/load the data into the tables from the del files
  3. Create the tables and import data in from the ixf files
  4. Last Resort: restore a backup somewhere else and export out what I might have missed

Actually Dropping the Objects and Schema

Like many DBA tasks, there is more effort in planning and preparing the backout plan than there is in the actual work. Actually dropping the schema and all objects it contains is accomplished like this:

$ db2 "call ADMIN_DROP_SCHEMA('SSIRS_AGENCY',NULL,'DBA','DRP_TAB_ERROR')"

  Value of output parameters
  --------------------------
  Parameter Name  : ERRORTABSCHEMA
  Parameter Value : -

  Parameter Name  : ERRORTAB
  Parameter Value : -

  Return Status = 0
$ db2 "select * from DBA.DRP_TAB_ERROR"
SQL0204N  "DBA.DRP_TAB_ERROR" is an undefined name.  SQLSTATE=42704


The final select is done to ensure that no errors were generated; when ADMIN_DROP_SCHEMA completes without errors, it does not create the error table at all, so the SQL0204N message here is expected.
I also verify there is nothing left in the schema like this:

$ db2 "select * from SYSIBMADM.OBJECTOWNERS where OBJECTSCHEMA='SSIRS_AGENCY' with ur"

OWNER                                                                                                                            OWNERTYPE OBJECTNAME                                                                                                                       OBJECTSCHEMA                                                                                                                     OBJECTTYPE              
-------------------------------------------------------------------------------------------------------------------------------- --------- -------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------- ------------------------

  0 record(s) selected.

Handling transactions across multiple Liferay API calls

There are times when you want to handle multiple Liferay API calls as a single, atomic transaction. For example, let’s say you have a business process that has the following 3 steps:

  1. Create an Organization.
  2. Add a Role to the User.
  3. Update the User’s status.

Let’s say steps 1 & 2 complete successfully, but step 3 fails for some reason. In this case, you should roll back the previous 2 steps.

To do this in Liferay, simply create a new Service Builder project, add all your API calls inside the *ServiceImpl class, and make sure the method throws either a SystemException or a PortalException. No other configuration changes are needed to package these calls into a single transaction.

Here is an example below.
For testing, if "bad" is passed in as the user's first name, the method will throw a SystemException. This will cause the previous statements in the method to roll back. Of course, remove that statement before deploying this code.

public void update(User user, long[] roleIds, String orgName)
        throws SystemException, PortalException {
    // Create a new Organization using the next counter value as its primary key
    long orgId = CounterLocalServiceUtil.increment(Organization.class.getName());
    Organization org = OrganizationLocalServiceUtil.getService().createOrganization(orgId);
    org.setName(orgName);
    OrganizationLocalServiceUtil.getService().updateOrganization(org);

    // Persist the updated user
    UserLocalServiceUtil.getService().updateUser(user);

    // Test hook: throwing SystemException rolls back everything above
    if (user.getFirstName().equals("bad")) {
        throw new SystemException("Exception thrown due to firstName = 'bad'");
    }

    // Add the roles to the user
    RoleLocalServiceUtil.getService().addUserRoles(user.getUserId(), roleIds);
}

SQL2032N When Backing Up a DB2 Database

When taking a backup of a DB2 LUW database, there are a lot of options to choose from. I like to specify as many as possible to make it clear exactly what I want, even if I am specifying the defaults. I very rarely specify the number or size of backup buffers, though, as DB2 does a decent job with those for me.

The Problem

Sometimes the options I specify are incorrect. Most commonly, this happens when I specify online-only options for an offline backup. SQL2032N usually means you specified an option that is not valid for that backup type. I typically get it when I specify this syntax:

 > db2 force applications all; db2 deactivate db sample; db2 backup db sample to /db2fs/backups compress include logs without prompting
DB20000I  The FORCE APPLICATION command completed successfully.
DB21024I  This command is asynchronous and may not be effective immediately.

DB20000I  The DEACTIVATE DATABASE command completed successfully.
SQL2032N  The "iOptions" parameter is not valid.

The problem here is that I have specified the “include logs” keywords on an offline backup. Include logs is the default after DB2 8.2, and is intended for online backups: it includes the log files that were written to during the backup so that a rollforward to the minimum point in time is possible. Prior to DB2 8.2, if you did not either include these logs or manage their retention somewhere else, you could end up with a backup image you were unable to restore, because you did not have the log files needed for minimum point-in-time recovery. Because of this, I am programmed to include these keywords. However, for an offline backup there is no database activity while the backup is occurring, so there are no log files to include, and DB2 does not allow you to specify them in the backup command.
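For contrast, on a database configured for online backups (archive logging enabled), the same options are perfectly valid; here is a sketch using the path from the example above:

 > db2 backup db sample online to /db2fs/backups compress include logs without prompting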

The Solution

The solution here is easy – just specify valid backup syntax. In this case, I simply change my backup syntax to this:

 > db2 force applications all; db2 deactivate db sample; db2 backup db sample to /db2fs/backups compress without prompting

And the backup is able to run and complete successfully.

ENOMEM by db2fmp in db2diag.log

I can’t remember the last time a ulimit bit me, but the time has come again. Everyone is used to removing all the limits for the instance ID at this point, I hope, but have you ever considered the ulimits for your fenced user id?

The Problem

In the DB2 diagnostic log, I saw error messages like this for 3 out of 4 databases on a fairly new server, occurring multiple times a day:


2016-01-12-17.25.31.588169-300 E302959A3363         LEVEL: Error (OS)
PID     : 36372558             TID : 4627           PROC : db2fmp (C) 0
INSTANCE: db2inst1             NODE : 000           DB   : SAMPLE
APPID   : 192.0.2.0.42856.151231085622
HOSTNAME: server1
EDUID   : 4627                 EDUNAME: db2fmp (C) 0
FUNCTION: DB2 UDB, SQO Memory Management, sqloLogMemoryCondition, probe:100
CALLED  : OS, -, malloc
OSERR   : ENOMEM (12) "There is not enough memory available now."
MESSAGE : Private memory and/or virtual address space exhausted, or data ulimit
          exceeded
DATA #1 : Soft data resource limit, PD_TYPE_RLIM_DATA_CUR, 8 bytes

The ENOMEM is a critical element of this error message, but so is the fact that it is coming from the db2fmp process. Also critical is the fact that this server is not experiencing memory pressure or memory misconfiguration problems. If the error is coming from a different process, the issue may be different.

A little research led me to conclude I was seeing scenario number 12 from this technote: http://www-01.ibm.com/support/docview.wss?uid=swg21470035

That scenario is that the fenced user id has a ulimit for data.

Resolving the Issue

Finding Fenced User ID

If you do not already know what your fenced user id is, you can determine it using any of these methods:

Method 1


==> cat /db2home/db2inst1/sqllib/ctrl/.fencedID
db2fenc1

In the above, ‘/db2home/db2inst1/’ would be replaced with the home directory of the DB2 instance owner.

Method 2


==> ps -ef | grep -i [db2]fmp
db2fenc1 10617056 65863810   0   Jan 10      -  0:00 db2fmp
 cogadmf 13631718 11599922   0   Jan 02      -  0:01 db2fmp
...

In this method, there may be many processes, and you can see that I have two DB2 instances on this server, so I get two fenced ids. The parent process id is the process id of db2sysc for the instance, so I could use that to map back which fenced id goes with which instance.
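As a sketch of that mapping, you could list the db2sysc processes and look for the parent process id from the output above (65863810 here; your ids will differ) to see which instance owner it belongs to:

==> ps -ef | grep [d]b2sysc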

Method 3


==> db2pd -fmp
Database Member 0 -- Active -- Up 3 days 22:27:04 -- Date 2016-01-13-17.34.26.321410
FMP:
Pool Size:       11
Max Pool Size:   200 ( Automatic )
Keep FMP:        YES
Initialized:     YES
Trusted Path:    /db2home/db2inst1/sqllib/function/unfenced
Fenced User:     db2fenc1
...

This will output information about all of the fenced processes, so it may be a long list – the fenced user is listed near the top.

Looking at ulimits

Once you know the fenced user, you want to log in as that user or su to it. This will list the limits for the user:


$ ulimit -a
time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         131072
stack(kbytes)        32768
memory(kbytes)       32768
coredump(blocks)     2097151
nofiles(descriptors) 2000
threads(per process) unlimited
processes(per user)  unlimited

In this case, the data limit is what is causing the problem.

Changing the ulimit

Depending on the division of responsibilities, you may only need to request that your System Administrator change the data limit for that user to unlimited. If you instead have access to root and should change it yourself, you can do this as root:


chuser data=-1 data_hard=-1 db2fenc1

After making this change, you will have to log in as the user again to see the changes. Always verify the changes took effect as expected.

If you have the DBM CFG parameter KEEPFENCED set to YES (which is the default), you will need to stop and start the DB2 instance before the changes will take effect.
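A sketch of checking that parameter and bouncing the instance as the instance owner (schedule the restart for a maintenance window):

db2 get dbm cfg | grep -i keepfenced
db2stop
db2start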

Note that all instructions here are for AIX because that is the OS where I ran into this issue.

XTIVIA Services Framework (XSF) – Update

XSF is a framework that XTIVIA has created (and used for multiple client engagements) that enables the rapid development of custom REST services in Liferay. REST services developed using XSF are coded in a fashion similar to JAX-RS or Jersey, but can take advantage of Liferay features such as roles and permissions.

We have talked about XSF here before and just wanted to provide a brief update on its current status. We have now included the source code for the framework in the XSF repository on GitHub. It is located in the ‘framework’ directory and can be built as a separate JAR; however, you can still create your XSF-based applications using either a Maven archetype or the version 1.1.0 JAR available on Maven Central.

Probably the best starting point is to read the PDF documentation for the framework, which can be found here.

We’ve also updated the current version of XSF to support Gradle builds (both the sample application as well as the framework itself). While the current version of XSF is still targeted to Liferay 6.2, Liferay has clearly committed to Gradle as its build tool of choice going forward in Liferay 7 and beyond. We really enjoy using Gradle and think you will too.

Speaking of Liferay 7, we are actively working on migrating XSF to Liferay 7 and we expect to have some exciting news to share in this space soon, so keep checking back here for updates.

DB2 LUW Error Message: SQL0964C

Error Message
SQL0964C  The transaction log for the database is full.  SQLSTATE=57011

or

DB21034E  The command was processed as an SQL statement because it was not a 
valid Command Line Processor command.  During SQL processing it returned:
SQL0964C  The transaction log for the database is full.  SQLSTATE=57011

In the DB2 diagnostic log, this error often shows up as DIA8309C.

You can search for occurrences of this issue in the DB2 diagnostic log using:

db2diag -e DIA8309C

When You Might Encounter This Error Message

This error message can occur with any statement that requires transaction logging – insert, update, import, etc.

What is Really Happening?

The active transaction logs for the database have become full. The disk that the transaction logs are on may or may not be full.

There are two primary scenarios with this error message. The first is a transaction requiring more active log space than is available, given both the size of the transaction logs and the log space being used by other connections currently executing. If this is the case, the transaction has likely filled up the transaction logs and been rolled back.

The second is a scenario called log file saturation. This happens when a connection does something that requires logging without committing or rolling back, and then sits idle for a long time – maybe even days. This scenario is somewhat more likely in a non-production environment. DB2 cannot release or archive that older log file until the transaction has committed or rolled back, so once the database reaches the full active log size of LOGFILSIZ * (LOGPRIMARY + LOGSECOND) after that log record, it cannot allocate new log files, even if all the files in between are complete and ready for archiving.
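A sketch of checking how close the database is to that limit, using the same MON_GET_TRANSACTION_LOG table function that appears later in this article (-2 means all members; the log values are reported in bytes):

select total_log_used
    , total_log_available
    , applid_holding_oldest_xact
from table(mon_get_transaction_log(-2)) as t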

How to Resolve

If you have a single transaction that is eating up active log space, you need to address the transaction. Often it is a large delete, but other operations can do it as well. The important thing is to break the transaction up into smaller pieces. This may involve breaking one DELETE up into many smaller DELETE statements, specifying a commit count on an IMPORT, or taking other actions to split the transaction and issue multiple commits. In some cases you may want to consider increasing one of the logging parameters, but that is usually a secondary solution rather than the preferred one; it is more often acceptable when a database has just gone live or has recently seen a large increase in volume. When increasing logging parameters, take the frequency of log archives during normal activity into account.
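As a sketch of breaking work into smaller units, here is an IMPORT with a commit count (the file and table names are placeholders); a large DELETE can be handled the same way by deleting in batches with a commit between each batch:

$ db2 "import from new_orders.del of del commitcount 10000 insert into prod.orders"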

If you have log file saturation, it likely will not clear itself up – a rollback will not be triggered for the problem connection. This means you must find the problem connection and force it off of the database. To find the application handle, you can parse the DB2 diagnostic log for ADM1823E. The results will look something like this:

$ db2diag -e ADM1823E
2015-11-12-09.24.28.952807-420 E90413E633            LEVEL: Error
PID     : 14896                TID : 140013098493696 PROC : db2sysc
INSTANCE: db2inst1             NODE : 000            DB   : SAMPLE
APPHDL  : 0-7                  APPID: REDACTED
AUTHID  : DB2INST1             HOSTNAME: REDACTED
EDUID   : 18                   EDUNAME: db2agent (SAMPLE)
FUNCTION: DB2 UDB, data protection services, sqlpgResSpace, probe:2860
MESSAGE : ADM1823E  The active log is full and is held by application handle 
          "0-7".  Terminate this application by COMMIT, ROLLBACK or FORCE 
          APPLICATION.

In this example, the application handle of the problem application is 7. You can also get this information using a snapshot:

$ db2 get snapshot for database on sample |grep oldest
Appl id holding the oldest transaction     = 7

Or using SQL against a monitoring table function:

$ db2 connect to sample

   Database Connection Information

 Database server        = DB2/LINUXX8664 10.5.5
 SQL authorization ID   = DB2INST1
 Local database alias   = SAMPLE

$ db2 "select APPLID_HOLDING_OLDEST_XACT from table(mon_get_transaction_log(-2))"

APPLID_HOLDING_OLDEST_XACT
--------------------------
                         7

  1 record(s) selected.

Once you have that application handle, it is important to see how long it has been idle, to determine whether it might be something that is still active, which would instead indicate a problem with multiple applications, transaction size, or transaction log file size:

select uow_start_time
	, timestampdiff(4,current timestamp - uow_start_time) idle_minutes 
from table(mon_get_connection(7,-2))

UOW_START_TIME             IDLE_MINUTES
-------------------------- ------------
2015-12-23-15.33.04.031182           16

  1 record(s) selected.

In this case, 16 minutes makes it likely that I have to check very closely with the user or application owner before forcing off the connection.

If I determine that the connection should be forced, this is how to force it off:

$ db2 "force application (7)"
DB20000I  The FORCE APPLICATION command completed successfully.
DB21024I  This command is asynchronous and may not be effective immediately.

Always be very cautious in forcing off connections, as it causes the transaction to be rolled back, and can cause issues with some applications.

Warning: Custom code rendering Liferay web content might not be cached

If you have custom code for rendering Liferay web content, you may not know it, but you may not be leveraging Liferay caching for the rendered web content, and this may be at the root of some performance problems in your Liferay environment. Recently, one of our clients ran into this performance issue on one of their sites, and I figured I would share the issue and its solution with the Liferay community.

Oddly enough, the method that you would assume you should use to get article content is badly behaved. The method in question is JournalArticleLocalService’s getArticleContent method. This method should never be used in client code, as it always goes to the database to render your content request, entirely bypassing the cache.

Replace it with the JournalContentUtil.getContent method. This method first checks the Liferay cache; if the article is present in the cache, it uses the cached copy. Otherwise, it delegates down to JournalArticleLocalService’s getArticleContent method.

Make this change and your site should start to perform the way you and your end-users expect.
