I neglected to send an update last night.
I changed the program (really rewrote it) to use SQL to do the summing with
GROUP BY, and eliminated the humongous arrays. It now runs consistently in
under 5 minutes.
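For reference, the set-based rollup reads like this. A minimal sketch using SQLite from Python rather than DB2 for i and embedded SQL; the table and column names (trans, summary, acct, amount) are made up for illustration, not the actual files:

```python
import sqlite3

# In-memory database standing in for the real tables (names are hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trans (acct TEXT, amount REAL)")
conn.executemany("INSERT INTO trans VALUES (?, ?)",
                 [("A", 10.0), ("A", 2.5), ("B", 7.0)])

# One GROUP BY statement replaces summing into per-key arrays in the program.
conn.execute("""
    CREATE TABLE summary AS
    SELECT acct, SUM(amount) AS total
    FROM trans
    GROUP BY acct
""")
rows = dict(conn.execute("SELECT acct, total FROM summary"))
print(rows)  # {'A': 12.5, 'B': 7.0}
```

The engine does the keyed accumulation internally, so the program never allocates or clears any summing arrays at all.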
I'm leaning toward the arrays being the bigger culprit. After digging a
little deeper to understand the code, I found it was clearing the 22 arrays
(most of them 15P4, all DIM(32767), totaling about 10,419,906 bytes)
almost 10,000 times. Some day I'm going to play with that concept and see
how much time that really takes. Meanwhile, I am in the testing/validation
phase.
Thanks, everyone, for your input. Even things that weren't applicable this
time will probably ring a bell down the road and help me solve a future
problem.
From: RPG400-L <rpg400-l-bounces@xxxxxxxxxxxxxxxxxx> On Behalf Of D*B
Sent: Friday, October 16, 2020 4:19 AM
To: RPG programming on the IBM i / System i <rpg400-l@xxxxxxxxxxxxxxxxxx>
Subject: RE: Performance question of RPG native read vs SQL
Unfortunately, the entire driver file is needed to build the summary file.
The driver is a transaction-based file. It is rolled up to the summary level
so they don't have to chug through the transactions every time. There is no
selection, only a top-to-bottom keyed read.
... if you could bring it to a bulk statement like create table ... as
select ... or insert into ... select ...,
this would beat the row-by-row processing, regardless of whether you are
using RLA or SQL to do this.
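D*B's point can be demonstrated in the same sketch style (SQLite from Python as a stand-in; all table and column names are hypothetical). Both paths below produce the same summary, but the second is one set-based statement instead of a fetch-and-apply loop:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trans (acct TEXT, amount REAL)")
conn.executemany("INSERT INTO trans VALUES (?, ?)",
                 [("A", 1.0)] * 1000 + [("B", 2.0)] * 1000)

# Row-by-row: read each transaction and apply it to the summary one at a time
# (the shape of an RLA read loop, or a single-row SQL cursor loop).
conn.execute("CREATE TABLE sum_rla (acct TEXT PRIMARY KEY, total REAL)")
for acct, amount in conn.execute("SELECT acct, amount FROM trans"):
    conn.execute("""INSERT INTO sum_rla VALUES (?, ?)
                    ON CONFLICT(acct) DO UPDATE SET total = total + ?""",
                 (acct, amount, amount))

# Bulk: one statement; the engine performs the whole rollup internally.
conn.execute("""CREATE TABLE sum_bulk AS
                SELECT acct, SUM(amount) AS total FROM trans GROUP BY acct""")

assert (dict(conn.execute("SELECT acct, total FROM sum_rla"))
        == dict(conn.execute("SELECT acct, total FROM sum_bulk")))
```

The bulk form avoids per-row statement overhead and lets the optimizer choose the access plan for the whole set, which is why it tends to win regardless of how the loop version is coded.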
Even faster versions could be done with parallelism (if your resources
allow it).
If your process is doing additional logic, the first step would be to analyze
where the most time is consumed.
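The "measure first" step looks like this in miniature. A sketch with Python's built-in profiler standing in for the IBM i tooling (on the i itself you would reach for something like PEX or the SQL Plan Cache); build_summary and its workload are invented for illustration:

```python
import cProfile
import pstats

def build_summary(n):
    # Hypothetical stand-in for the real rollup work.
    total = 0
    for i in range(n):
        total += i * i
    return total

pr = cProfile.Profile()
pr.enable()
result = build_summary(100_000)
pr.disable()

# Rank by cumulative time to see which calls dominate the run.
pstats.Stats(pr).sort_stats("cumulative").print_stats(5)
```

Whatever the tool, the idea is the same: find the one or two hot spots before rewriting anything, so the effort goes where the minutes actually are.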
This is the RPG programming on IBM i (RPG400-L) mailing list.