


Hi, Glenn:

With row-level triggers, the DB2 database must issue a full EXTERNAL program call for each record to invoke the trigger *PGM.

Full external program calls carry much more overhead than internal procedure calls, so it is easy to see where some of this extra cost comes from.

As others have mentioned, one way to minimize the impact on the applications using tables or files with triggers is to keep the triggers as short as possible: have them simply write the relevant information to a data queue, and have one or more programs waiting on that data queue to service any requests that come in. In this way, you shift much of the overhead out of the sequential path of the WRITE or UPDATE to the original table or file and over to a separate "server job".
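
For illustration, here is a minimal sketch of the trigger side of that arrangement. It assumes the QSYS2.SEND_DATA_QUEUE service, which only arrived in later IBM i technology refreshes; on earlier releases, an external trigger program calling the classic QSNDDTAQ API does the same job. All object and column names below are hypothetical:

  CREATE TRIGGER MYLIB.ITEM_AI_Q
    AFTER INSERT ON MYLIB.ITEM_MASTER
    REFERENCING NEW ROW AS N
    FOR EACH ROW
    MODE DB2SQL
  BEGIN
    -- Enqueue only the key; a server job dequeues it and does
    -- the expensive work outside the application's critical path.
    CALL QSYS2.SEND_DATA_QUEUE(
           MESSAGE_DATA       => CAST(N.ITEM_ID AS VARCHAR(32)),
           DATA_QUEUE         => 'ITEMQ',
           DATA_QUEUE_LIBRARY => 'MYLIB');
  END

A server job then sits in a receive loop on ITEMQ (e.g., via the QRCVDTAQ API) and processes entries at its own pace.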

This data-queue technique has been around since System/38 CPF, and a program running as such a server job was often called (colloquially) a "never ending program" (NEP).

I hope that helps,

Mark S. Waterbury

> On 4/27/2015 9:36 PM, Glenn Gundermann wrote:
Hi all,

I have one SQL AFTER INSERT trigger program and one SQL AFTER UPDATE
trigger program, and I'm not happy with the performance.

Does anyone know of any published numbers for what is considered
acceptable trigger performance?

I've tried inserting and updating anywhere from 500 to 12,000 rows at a
time.
I averaged 12 insert test times and 12 update test times, excluding the
first test in each batch, which is always higher, to produce the following
results:

When inserting rows, the avg elapsed times are:
without trigger: 1.193 secs / thousand
with trigger: 36.151 secs / thousand
A factor of 30x slowdown.

When updating rows, the avg elapsed times are:
without trigger: 0.482 secs / thousand
with trigger: 32.804 secs / thousand
A factor of 68x slowdown.

I find the above factors of slowdown appalling.

The purpose of the triggers is to write the incoming information to an
interface table.
The table that has the triggers active has 422 columns. The interface
table will receive all 422 columns plus additional information to make a
total of 541 columns.
Of the 541 columns, I'm calling a UDF for 107 of them. My next task is to
see if the UDFs can be improved.
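
For illustration, here is a hypothetical scalar UDF of the general shape involved (name and logic made up, not my actual UDFs), with the definition options I plan to check first, since DETERMINISTIC, NO EXTERNAL ACTION, and NOT FENCED can all reduce per-call overhead:

  CREATE FUNCTION MYLIB.CVT_YMD (P_YMD CHAR(8))
    RETURNS DATE
    LANGUAGE SQL
    DETERMINISTIC        -- same input always yields same output
    NO EXTERNAL ACTION
    NOT FENCED           -- run without fenced-environment overhead
    RETURN DATE(SUBSTR(P_YMD, 1, 4) CONCAT '-' CONCAT
                SUBSTR(P_YMD, 5, 2) CONCAT '-' CONCAT
                SUBSTR(P_YMD, 7, 2));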

I'm not sure what else could be slowing the process down.
The interface table has a generated identity column; NO CACHE versus
CACHE 200 makes no difference. I haven't tried it without the identity
column, since I really want to keep it, but it would be interesting to
see the difference.
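
If anyone wants to repeat the cache experiment, the identity cache setting can be switched between runs with an ALTER, e.g. (table and column names hypothetical):

  ALTER TABLE MYLIB.INTERFACE_TBL
    ALTER COLUMN IFC_ID SET CACHE 200;   -- or SET NO CACHE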

I'm using this method:
REFERENCING OLD TABLE AS OLD_ITEM
NEW TABLE AS NEW_ITEM
FOR EACH STATEMENT
MODE DB2SQL

I tried this method, which was much slower (avg 74 secs/thousand):
REFERENCING OLD ROW AS OLD_ITEM
NEW ROW AS NEW_ITEM
FOR EACH ROW
MODE DB2ROW
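
For completeness, here is the faster statement-level form fleshed out into a complete (hypothetical) trigger; the real one carries all 541 columns plus the UDF calls rather than this toy column list:

  CREATE TRIGGER MYLIB.ITEM_AI
    AFTER INSERT ON MYLIB.ITEM_MASTER
    REFERENCING NEW TABLE AS NEW_ITEM
    FOR EACH STATEMENT
    MODE DB2SQL
  BEGIN
    -- One set-based INSERT per triggering statement,
    -- rather than one trigger invocation per row.
    INSERT INTO MYLIB.ITEM_IFC (ITEM_ID, ITEM_DESC, LOAD_TS)
      SELECT ITEM_ID, ITEM_DESC, CURRENT TIMESTAMP
        FROM NEW_ITEM;
  END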

I'm wondering what can be done to accomplish the same end result and have
the least impact on the application doing the update/insert of the master
table.

If I first write the information to an intermediate table and have a
trigger on this table execute the UDFs and write to the interface table,
would the application still be waiting for everything to be complete?

Is there any performance monitoring that can be done for triggers?

Any suggestions would be most welcome.

Thank you.

Yours truly,

Glenn Gundermann
Email: glenn.gundermann@xxxxxxxxx
Work: (416) 675-9200 ext. 89224
Cell: (416) 317-3144



