We have an RPGLE program that I have been asked to evaluate for performance
issues. I want to propose some changes to it but I'm having a hard time
performance testing the changes to prove my theory. Therefore, I thought I
would ask the group some questions and make sure that I am not off track or
misinformed, especially the logic behind the first change that I want to
make.
The first change.
The program currently uses native RPG reads (F-spec). There are 7,328,305
records in the driver file, which contains 174 fields for a total of 1,172
bytes per record. Of those 174 fields, the program uses only 18 of them, for
a total of 125 bytes. It is my understanding that with SQL, if I build a
cursor selecting just those 18 fields, that is the only data the system
reads (generically speaking), versus reading all 1,172 bytes and extracting
the 125 bytes it needs. Changing to SQL would therefore eliminate reading
1,047 bytes per record, or about 7.6 GB in total.
The second change.
As it reads the 7,328,305 records, the program reads by key and accumulates
totals. When the key changes, it dumps the totals to a summary file, clears
the arrays, and starts accumulating for the next key. Would changing this to
SQL using SUM() and GROUP BY make it faster?
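For what it's worth, the read-by-key / accumulate / dump-on-key-change loop collapses into a single aggregate query. Here is a minimal sketch using SQLite from Python just to show the shape of it (the table name `driver`, and columns `grp_key` and `amount`, are made up for illustration; the equivalent DB2 for i statement would look much the same for a simple case like this):

```python
import sqlite3

# Toy stand-in for the driver file: a key column and one value to total.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE driver (grp_key TEXT, amount INTEGER)")
con.executemany(
    "INSERT INTO driver VALUES (?, ?)",
    [("A", 10), ("A", 5), ("B", 7), ("B", 3), ("B", 1)],
)

# One statement replaces the whole control-break loop: the engine
# groups by key and sums within each group.
rows = con.execute(
    "SELECT grp_key, SUM(amount) FROM driver "
    "GROUP BY grp_key ORDER BY grp_key"
).fetchall()
print(rows)  # [('A', 15), ('B', 11)]
```

Whether this beats the RPG loop depends on what access plan the optimizer picks, but it at least hands the accumulation work to the database engine instead of doing it a record at a time in the program.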
I've left out some details, but bear with me and look at what I am asking;
don't try to figure out the link between the two changes above and this
third change. It is all linked together, but I haven't provided the details
of how.
The third change.
There are 22 arrays in the program for the rolling totals.
A single occurrence across all of the arrays totals 318 bytes. Each array is
dim(32767), for a total of 10,419,906 bytes. Since there are at most 5,000
index values, I am going to change the dim(32767) to dim(5000). That reduces
the program's memory consumption by about 8 MB. This is on a huge machine; I
don't know how much memory it has or how it is configured, but I'm sure it
has a lot. Generally speaking, could a program requiring 10,419,906 bytes of
memory be a performance hog even if there is ample memory in the subsystem?
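The memory numbers check out arithmetically. A quick sketch, assuming (as the post does) that the 318-byte figure is one element's worth across all 22 arrays combined:

```python
# Check of the array-memory figures quoted above.
OCCURRENCE_BYTES = 318   # one element across all 22 arrays
OLD_DIM = 32_767
NEW_DIM = 5_000

old_total = OCCURRENCE_BYTES * OLD_DIM   # 10,419,906 bytes, as stated
new_total = OCCURRENCE_BYTES * NEW_DIM   # 1,590,000 bytes after the change
saved = old_total - new_total            # roughly 8.8 MB reclaimed

print(old_total, new_total, saved)
```

So the saving is about 8.8 MB, consistent with the "about 8M" estimate.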
Thanks in advance for all of your help and knowledge.
This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact
[javascript protected email address].