As mentioned elsewhere, measure what the bottleneck actually is rather than guessing.

Is the conversion process run as one big job, as several jobs run sequentially, or as several jobs run in parallel?

If several jobs sequentially, can they be run in parallel?

If one big job, can it be split into smaller jobs and those run in parallel?
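
For example, if the conversion steps are separate programs, a minimal sketch (hypothetical program, library and job queue names, untested) to run them side by side instead of back to back:

  SBMJOB CMD(CALL PGM(MYLIB/CONV01)) JOB(CONV01) JOBQ(MYLIB/CONVJOBQ)
  SBMJOB CMD(CALL PGM(MYLIB/CONV02)) JOB(CONV02) JOBQ(MYLIB/CONVJOBQ)
  SBMJOB CMD(CALL PGM(MYLIB/CONV03)) JOB(CONV03) JOBQ(MYLIB/CONVJOBQ)

Note the job queue entry must allow more than one active job (e.g. CHGJOBQE SBSD(MYLIB/CONVSBS) JOBQ(MYLIB/CONVJOBQ) MAXACT(3)), otherwise the submitted jobs will still run one at a time.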

What is the ratio of CPU seconds to elapsed seconds for the conversion process, or for each job in it?

This will give you a crude idea of whether the conversion process is CPU bound or not.

I would guess not. :)
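
For example, if WRKACTJOB shows a conversion job has used 1,800 CPU seconds after 2 hours (7,200 seconds) of elapsed time, it is only about 25% CPU busy; most of the other 75% is likely spent waiting on disk I/O, so the disk-oriented tips below will buy you more than extra CPU would.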

Having said that, to speed up disk access, in no particular order:
. Ensure that the sizing of the output files is such that they don't get extended during the conversion process.
. Pre-allocate the expected size of the output files prior to the conversion process.
. If any input files are being accessed sequentially and are not already blocked, override them to be optimally blocked.
. Remove all logical file members over the output files; re-add them after the conversion programs are finished.
. If the output files are not uniquely keyed, override them so that writes are optimally blocked (a CL sketch of these items follows this list).
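
A rough CL sketch of the file-related items above, using hypothetical library and file names (untested; adjust the sizes to your volumes):

  /* Pre-allocate the output file so it is not extended mid-run;        */
  /* SIZE is (initial records, increment size, number of increments)    */
  CHGPF FILE(MYLIB/OUTFILE) SIZE(5000000 500000 100)

  /* Block sequential input reads (second value = records per block)    */
  OVRDBF FILE(INFILE) SEQONLY(*YES 1000)

  /* Block output writes the same way if the file is not uniquely keyed */
  OVRDBF FILE(OUTFILE) SEQONLY(*YES 1000)

  /* Drop logical file members over the output file ...                 */
  RMVM FILE(MYLIB/OUTFILEL1) MBR(OUTFILEL1)
  /* ... run the conversion, then re-add them:                          */
  ADDLFM FILE(MYLIB/OUTFILEL1) MBR(OUTFILEL1)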

More generally, although the first may also speed up disk access:
. Is anything else running in the same memory pool? Try to get exclusive use of a large memory pool to run in (see the CL sketch after this list).
. Is anything else running in the same subsystem? If possible try to get it shut down.
. Is anything else running on the partition? If possible try to get it shut down.
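
For example, on this 16 GB box, something along these lines (hypothetical subsystem name and sizes; CHGSHRPOOL SIZE is in kilobytes) would give the conversion jobs most of the machine:

  /* Enlarge a shared pool and raise its activity level */
  CHGSHRPOOL POOL(*SHRPOOL1) SIZE(12000000) ACTLVL(20)

  /* Point the conversion subsystem's only pool at it   */
  CHGSBSD SBSD(MYLIB/CONVSBS) POOLS((1 *SHRPOOL1))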

Possible source change:
. Are the input files being accessed sequentially via key? Do they really need to be? Removing the key in the file spec is a fairly minimal change if it is not really needed.
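
For illustration, with a hypothetical input file CUSTMAST, the change is just the record address type (the K in position 34) on the fixed-form F-spec:

      * Keyed access (reads follow the index):
     FCUSTMAST  IF   E           K DISK
      * Arrival sequence (remove the K; reads go in physical order and block well):
     FCUSTMAST  IF   E             DISK

This only works if the program does no CHAIN/SETLL/READE against that file, i.e. it simply READs it start to finish.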

Kevin Wright

-----Original Message-----
From: RPG400-L <rpg400-l-bounces@xxxxxxxxxxxxxxxxxx> On Behalf Of Jim Franz
Sent: Wednesday, 20 March 2019 9:03 AM
To: rpg400-l@xxxxxxxxxxxxxxxxxx
Subject: RPGIV performance writing large number of records

Am reviewing performance issues of a conversion process that reads data from
many files and writes many records (hundreds of thousands or millions of
records). There are 30 or 40 programs, of which 4 are very complex and taking
most of the time. All RPGIV, all write files as "output", not "update add".
About half write to files allowing null values, without errors (using
%nullind).
The process is taking longer to execute than the 12 hr window (their original
estimate) to get it done, by 2 to 3 times. The programmer is very experienced
(in RPGIV) so it's not simple stuff (but am checking).
Not an option to rewrite at this point, but looking for tips on allocation
of resources.
This is V7R1, single-processor POWER7+ with 16 GB memory, 1.5 TB local disk.

Jim Franz
