Hey all,
I work in a massive 400 shop that moves beaucoup data all over the place.
I have a process that runs on a host-400 and reads, through a DDM file, a
file on a remote-400.
This process reads the remote-400 file, does minimal data massaging, and
then simply writes/updates each record it reads to a file on the host-400.
The objective of this process is to take 3.8 million records from the
remote-400 to the host-400 on a daily basis as quickly as possible.
Someone wants to attach a trigger to the host-400 receiving file that will
add a record to a second host-400 file whenever a record is added to the
initial/original host-400 file.
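For reference, blocking on a job like this is typically requested with an
OVRDBF override before the program is called. A minimal CL sketch (the file,
library, and program names here are made up for illustration):

```
/* Request sequential-only processing with a block of 1000 records      */
/* for the host-400 output file. MYLIB/HOSTFILE and COPYPGM are         */
/* hypothetical names, not from the original post.                      */
OVRDBF     FILE(HOSTFILE) TOFILE(MYLIB/HOSTFILE) SEQONLY(*YES 1000)
CALL       PGM(MYLIB/COPYPGM)
DLTOVR     FILE(HOSTFILE)
```

The second value on SEQONLY is the number of records per block; the system
may adjust it to fit the buffer size it actually uses.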

I would like to know the following:
The 400s I am talking about are currently at the V5R2 level, and they are
the biggest models available.
The main point is, I want to add a trigger program that will do the
above-mentioned thing [add a record when another is added, or update a
record when another is updated] on the host-400 file.  Currently it is
desirable to continue the BLOCKING that occurs in this job.
Will the addition of the trigger cause the host-400 system to stop BLOCKING
when writing/updating the file[s], and thereby degrade performance
significantly?
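For what it's worth, the trigger in question would be attached with ADDPFTRG,
once per event. A sketch, again with hypothetical object names:

```
/* Attach the trigger program for both events.                          */
/* MYLIB/HOSTFILE and TRGPGM are hypothetical names.                    */
ADDPFTRG   FILE(MYLIB/HOSTFILE) TRGTIME(*AFTER) TRGEVENT(*INSERT) +
             PGM(MYLIB/TRGPGM)
ADDPFTRG   FILE(MYLIB/HOSTFILE) TRGTIME(*AFTER) TRGEVENT(*UPDATE) +
             PGM(MYLIB/TRGPGM)
```

Per the article quoted below, it is the *INSERT trigger that causes DB2/400
to disable blocking on an output-only open.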

I base my question on the following, which is from an article describing
tests done on a V3R4 machine.

"Record blocking is never used for insert operations when there's an insert
event trigger for the file. (Even if you've specified that blocking should
be used, DB2/400 disables it when the file is opened.)"

"Blocked vs. Unblocked I/O

When record blocking is available, it's one of the most effective ways to
speed up an application that performs sequential processing. (With record
blocking, the system passes a program a block, or buffer, of records for
processing rather than transferring one record at a time.) In the tests
reported in this article, as well as in other tests we've conducted or
reviewed, we've seen applications that use blocking execute substantially
more quickly than those that don't. The exact difference in throughput
between blocked and unblocked I/O depends on the file and application, but
for long-running sequential-processing applications, you should use blocking
whenever you can. ("SQL/400 vs. RPG IV: Which One's Faster?"
provides more information about the performance benefits of record
blocking.)

DB2/400 permits record blocking only when a file is opened as input-only or
output-only. As we mentioned earlier, when you open a file for input only,
triggers have no effect. When you open a file for update or delete
operations, blocking is never in effect, so triggers aren't an issue as far
as record blocking is concerned. However, when you open a file for output
only and there's a trigger associated with the insert event, DB2/400
disables blocking that otherwise might be used to improve performance."

Advice most appreciated!

         ~~~
         (0 0)     Tim Truax...
--ooO--(_)--Ooo--------------------------------------------



This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].
