If you run into a LOG FULL condition in DB2 (SQL error code SQL0964C), you need to increase one or more of the following database configuration parameters:

LOGPRIMARY, LOGSECOND, or LOGFILSIZ.
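For example, the following DB2 CLP commands raise these limits. The database name SAMPLE and the values shown are placeholders; size the logs for your own workload. Note that LOGSECOND can be updated online, while changes to LOGPRIMARY and LOGFILSIZ only take effect the next time the database is activated:

```shell
# Database name and values are examples only.
db2 connect to SAMPLE

# LOGFILSIZ is in 4 KB pages: 10240 pages = 40 MB per log file
db2 update db cfg for SAMPLE using LOGFILSIZ 10240
db2 update db cfg for SAMPLE using LOGPRIMARY 20
db2 update db cfg for SAMPLE using LOGSECOND 50
```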

So you increase those parameters, but even after that you run into the LOG FULL condition again. What is happening?

A single transaction containing a very large INSERT, UPDATE, or DELETE can consume lots of log space, since all of its changes must be written to the DB2 logs. But suppose that is not your case, and you are still running into the LOG FULL condition. The other possibility is that, out of hundreds of connections, one connection is sitting idle and has not committed its transaction. The amount of active log space available is bounded by the oldest uncommitted transaction: its log records remain active, and DB2 will not release them from the active log path until that transaction commits.

If your log directory only has room for 50 log files, then by the time you have filled all 50 and that oldest transaction has still not committed, your active log path is full and transactions will fail due to insufficient log space.

There are of course solutions to this problem, such as enabling infinite logging (setting LOGSECOND to -1), specifying the NUM_LOG_SPAN database configuration parameter (the number of log files a single transaction is allowed to span before that transaction is rolled back), or the MAX_LOG parameter (the percentage of the active log that any one transaction can use before it is rolled back).
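For example (the database name SAMPLE and the thresholds are placeholders):

```shell
# Roll back any transaction that spans more than 50 log files
db2 update db cfg for SAMPLE using NUM_LOG_SPAN 50

# Roll back any transaction that uses more than 25% of the active log
db2 update db cfg for SAMPLE using MAX_LOG 25

# Or enable infinite logging instead
# (archive logging must be configured for this to work)
db2 update db cfg for SAMPLE using LOGSECOND -1
```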

But if you find you are using up too much active log space, you can use the following query to find out which user and application is holding up the tail of the active log, provided your DB2 release still supports the SNAPSHOT administrative views.

SELECT AI.APPL_STATUS as Status,
       SUBSTR(AI.PRIMARY_AUTH_ID,1,10) AS "Authid",
       SUBSTR(AI.APPL_NAME,1,15) AS "Appl Name",
       INT(AP.UOW_LOG_SPACE_USED/1024/1024) AS "Log Used (M)",
       INT(AP.APPL_IDLE_TIME/60) AS "Idle for (min)",
       AP.APPL_CON_TIME AS "Connected Since"
FROM   SYSIBMADM.SNAPDB DB,
       SYSIBMADM.SNAPAPPL AP,
       SYSIBMADM.SNAPAPPL_INFO AI
WHERE  AI.AGENT_ID = DB.APPL_ID_OLDEST_XACT
AND    AI.AGENT_ID = AP.AGENT_ID;

[Note: Thanks to Chris Eaton for the query.]

If you are on a later release of DB2 where the SNAPSHOT-based views are deprecated, you can use the following query, based on the MON_GET table functions, to find the LOG HOG transaction, particularly when you do not want to use the MAX_LOG or NUM_LOG_SPAN parameters.

select a.workload_occurrence_state as status, 
             substr(a.session_auth_id,1, 10) as authid,
             substr(c.application_name, 1, 10) as applname, 
             int(a.uow_log_space_used/1024/1024) as logusedM, 
             timestampdiff(4, char(current timestamp - b.agent_state_last_update_time)) as idleformin, 
             c.connection_start_time as connectedsince 
from table(mon_get_unit_of_work(NULL,-2)) as a, 
     table(mon_get_agent(NULL,NULL,NULL,-2)) as b, 
     table(mon_get_connection(NULL, -2)) as c 
where a.application_handle = b.application_handle and 
      a.coord_member = b.member and 
      a.coord_member = a.member and 
      b.agent_type = 'COORDINATOR' and 
      a.uow_stop_time is null and 
      a.application_handle = c.application_handle and 
      a.coord_member = c.member and 
      b.event_type = 'WAIT' and 
      b.event_object = 'REQUEST' and 
      b.event_state = 'IDLE'
;

[Thanks to Scott Walkty of the Toronto Lab for providing the query.]

You can use a simple script to run this query at a regular interval and collect the output.

For example:

@echo off
setlocal ENABLEDELAYEDEXPANSION
SET LOGFILE=loghog.LOG

ECHO !TIME! Starting Log HOG Query >_z && type _z && type _z >> %LOGFILE%

REM ping -n 6 pauses for roughly 5 seconds
set SLEEP5SEC=ping -n 6 127.0.0.1

db2 connect to im737id1 > NUL

REM 2880 iterations x 5 seconds = 4 hours
FOR /L %%i IN (1 1 2880) DO (
  ECHO Start of Loop %%i at !DATE! !TIME! >_z && type _z && type _z >> %LOGFILE%
  db2 -tf loghog.sql >_z && type _z && type _z >> %LOGFILE%
  %SLEEP5SEC% > NUL
)

The above script runs the query at an interval of 5 seconds for 4 hours, appending the output to a file (the >_z && type _z idiom both displays each line and appends it to the log). The same script can easily be ported to AIX or Linux.
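A rough port of that loop to AIX or Linux might look like the following. This is a sketch: the loghog.sql file name, log file name, and interval are assumptions, and the db2 invocation is commented out so the skeleton runs on any machine.

```shell
#!/bin/sh
LOGFILE=loghog.log
INTERVAL=5        # seconds between samples
ITERATIONS=2880   # 2880 x 5 s = 4 hours

# Run the monitoring query in a loop, both displaying the output
# and appending it to LOGFILE (tee -a replaces the _z trick).
run_loop() {
  i=1
  while [ "$i" -le "$ITERATIONS" ]; do
    echo "Start of loop $i at $(date)" | tee -a "$LOGFILE"
    # On a machine with the DB2 CLP available, uncomment this line:
    # db2 -tf loghog.sql | tee -a "$LOGFILE"
    sleep "$INTERVAL"
    i=$((i + 1))
  done
}
```

Add a final `run_loop` line (or source the file and call run_loop) to start collecting; you would typically launch it in the background with nohup.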

You could also write a shell script that checks whether the idle time of the log-hog connection exceeds 30 minutes and, if so, forces that connection off.
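A minimal sketch of that idea, assuming you feed it the application handle and idle minutes extracted from the monitoring query's output (the two-column input format and the 30-minute threshold are assumptions):

```shell
#!/bin/sh
THRESHOLD_MIN=30

# Reads "application_handle idle_minutes" pairs on stdin and prints a
# force command for every connection idle longer than the threshold.
kill_log_hogs() {
  while read -r handle idle_min; do
    if [ "$idle_min" -gt "$THRESHOLD_MIN" ]; then
      # In a live script, run: db2 "force application ($handle)"
      echo "force application ($handle)"
    fi
  done
}
```

You would pipe the handle and idle-time columns of the monitoring query into kill_log_hogs, for example by running the query with db2 -x and selecting only those two columns.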