In the time it took to commute home, I see multiple answers with lots of good
ideas.
All of the input reading is keyed (logicals). There are 4 or 5 SQLRPGLE
programs doing the reading - still RPGIV for the writing - and those are not
having any performance issues.
Some of the bigger programs may read from 20 or more files to pull in the
data needed to write.
There is some other activity on the system that I am working to restrict. By
tomorrow's test there will be fewer subsystems running, and basically no
other activity.
This is a single chassis with 2 partitions (DEV & PROD), and each has its own
specific drives allocated.
I have thought of dropping the indexes, although some programs definitely
read the output data by key in a later step. Most files have a single key or
even none, but some have up to 10 logicals.
Splitting the "monolithic" job into separate parts - I will explore that.
In WRKSYSACT and WRKACTJOB the job often shows as running at "165%" CPU or
similar. Never quite understood that.
It is running in the normal batch subsystem, with about 65% of the memory
allocated to that pool (this is a DEV box with few interactive users and
very little printing).
WRKDSKSTS shows all disks nearly equal at around 22% busy, and all about 82%
full.
No journals (on the output), no commitment control.
Will check FRCRATIO and %trimr.
Also starting to look at loading master file records into arrays or data
structures instead of reading them over & over.
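A minimal sketch of that idea in free-form RPG (7.1 TR7 or later; CUSTMAST
and the cmId/cmName fields are illustrative names, not the actual conversion
files):

```
       // Load the customer master once into a data structure array,
       // then look entries up in memory instead of chaining per record.
       dcl-f custmast usropn;            // illustrative master file

       dcl-ds cust qualified dim(10000); // size to the expected row count
          id    packed(7: 0);
          name  char(30);
       end-ds;
       dcl-s custCnt int(10) inz(0);

       open custmast;
       read custmast;
       dow not %eof(custmast);
          custCnt += 1;
          cust(custCnt).id   = cmId;     // illustrative external fields
          cust(custCnt).name = cmName;
          read custmast;
       enddo;
       close custmast;
```

Once the array is loaded (in key order, or sorted with SORTA), an in-memory
lookup against cust(*).id can replace a repeated CHAIN to the master file.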

One thing I'd like to ask about: whether actions like changing the job's run
priority from 50 down to 25 (a lower number is a higher priority) and a very
large timeslice (50000) can help or create issues. All of this could be
tested, but he keeps insisting it helps...?
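For reference, the change being asked about amounts to this (the job
qualifier is illustrative; on IBM i a lower RUNPTY number means a higher
priority, so 50 -> 25 raises the job's priority):

```
CHGJOB JOB(123456/QUSER/CONVERT01) RUNPTY(25) TIMESLICE(50000)
```

If the partition really is disk bound rather than CPU bound, neither change
should make much difference - which is itself an easy test.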

Jim Franz



-----Original Message-----
From: RPG400-L [mailto:rpg400-l-bounces@xxxxxxxxxxxxxxxxxx] On Behalf Of
Kevin Wright
Sent: Tuesday, March 19, 2019 6:54 PM
To: RPG programming on IBM i <rpg400-l@xxxxxxxxxxxxxxxxxx>
Subject: RE: RPGIV performance writing large number of records

As mentioned elsewhere, look at what the bottleneck actually is rather than
guessing.

Is the conversion process run in one big job, or several jobs sequentially
or several jobs in parallel?

If several jobs sequentially, can they be run in parallel?

If one big job, can it be split into smaller jobs and those run in parallel?

What is the ratio of CPU seconds to elapsed seconds for the conversion
process, or each job in the conversion process?

This will give you a crude idea of whether the conversion process is CPU
bound or not. For example, 1,000 CPU seconds over a 10,000-second elapsed
run (10%) suggests the job is mostly waiting on I/O.

I would guess not :) .

Having said that, to speed up disk access, in no particular order:
. Ensure that the sizing of the output files is such that they don't get
extended during the conversion process.
. Pre-allocate the expected size of the output files prior to the conversion
process.
. If any input files are being accessed sequentially and are not already
blocked, override them to be optimally blocked.
. Remove all logical view members over the output files. Re-add them after
the conversion programs are finished.
. If the output files are not uniquely keyed, override them to be optimally
blocked.
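The file-level suggestions above can be sketched in CL along these lines
(library, file, and sizing values are illustrative, not the real objects):

```
/* Pre-allocate the output file so it is not extended repeatedly   */
/* during the run: SIZE(initial-records increment max-increments)  */
CHGPF FILE(CVTLIB/OUTFILE) SIZE(5000000 500000 100)

/* Force optimal blocking on a file read or written sequentially   */
OVRDBF FILE(INFILE) SEQONLY(*YES 1000)

/* Drop a logical file's member before the run and re-add it after */
/* so its access path is not maintained on every single write      */
RMVM   FILE(CVTLIB/OUTLF01) MBR(OUTLF01)
/* ... run the conversion ... */
ADDLFM FILE(CVTLIB/OUTLF01) MBR(OUTLF01)
```

The second element of SEQONLY is the number of records per block; pick the
largest value that fits the record length.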

More generally, although the first may also speed up disk access:
. Is anything else running in the same memory pool? Try to get exclusive use
of a large memory pool to run in.
. Is anything else running in the same subsystem? If possible try to get it
shut down.
. Is anything else running on the partition? If possible try to get it shut
down.

Possible source change:
. Are the input files being accessed sequentially via key? Do they really
need to be? Removing the key from the file spec is a fairly minimal change
if it is not really needed.
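For example, in a fixed-form F-spec the change is just dropping the K, so
the file is read in arrival sequence instead of by key (CUSTMAST is an
illustrative name):

```
      * Keyed access:
     FCUSTMAST  IF   E           K DISK
      * Arrival-sequence access (often faster for a full scan):
     FCUSTMAST  IF   E             DISK
```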

Kevin Wright

-----Original Message-----
From: RPG400-L <rpg400-l-bounces@xxxxxxxxxxxxxxxxxx> On Behalf Of Jim Franz
Sent: Wednesday, 20 March 2019 9:03 AM
To: rpg400-l@xxxxxxxxxxxxxxxxxx
Subject: RPGIV performance writing large number of records

Am reviewing performance issues in a conversion process that reads data from
many files and writes many records (hundreds of thousands to millions of
records). 30 or 40 programs, of which 4 are very complex and take most of
the time. All RPGIV; all write to the files as "output", not "update add".
About half write to files that allow null values, without errors (using
%nullind).
The process is taking 2 to 3 times longer than the 12-hour window (their
original estimate) to get it done. The programmer is very experienced (in
RPGIV), so it's not simple stuff (but I am checking).
Rewriting is not an option at this point, but I am looking for tips on
allocation of resources.
This is V7R1, a single-processor POWER7+ with 16 GB memory and 1.5 TB of
local disk.

Jim Franz
--
This is the RPG programming on IBM i (RPG400-L) mailing list To post a
message email: RPG400-L@xxxxxxxxxxxxxxxxxx To subscribe, unsubscribe, or
change list options,
visit: https://lists.midrange.com/mailman/listinfo/rpg400-l
or email: RPG400-L-request@xxxxxxxxxxxxxxxxxx
Before posting, please take a moment to review the archives at
https://archive.midrange.com/rpg400-l.

Please contact support@xxxxxxxxxxxx for any subscription related questions.


