James Newman, CDP wrote:
I'm on a V5R2 machine and rewriting a program that currently uses 60-element
arrays when processing records. As part of the processing I have to move
element 59 to element 60, 58 to 59, etc., and put the newly read record in
element 1. A friend (thanks, Marshall) suggested that instead of moving the
elements one at a time, I define the arrays as data structures and move the
data structure instead. This should be much faster. But do I have to define
each element as part of the larger data structure? I guess there's no
"OCCURS X TIMES" in RPG, eh?
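For what it's worth, RPG IV doesn't make you define each element separately:
the DIM keyword on a data-structure subfield declares the whole array, and
the OCCURS keyword (a multiple-occurrence data structure) is the nearest
thing to "OCCURS X TIMES". A rough sketch only, with made-up names and an
assumed 80-byte record length:

     D SaveRecs        DS
     D   RecSave                     80A   DIM(60)

A single EVAL of one such data structure to another then moves all 60
elements as one block of character data, which is the kind of move the
friend is describing.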
James:
If I were rewriting, I probably wouldn't exactly "move" anything. I
might have a 61-element array and overlay it with a BASED 60-element
array. The 60-element array would initially be based on element 2 of
the 61-element array. Element 1 would effectively be 'empty' or
simply initialized with default values.
At some point, a newly read record would go into element 1, and I'd
then set the basing pointer to point to element 1 instead of element 2.
There should be no need to "move" the array data at all. Just change
the address.
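In rough outline it might look something like this (a sketch only -- the
names, the 80-byte element size, and the NewRecord field are invented for
illustration, not taken from the actual program):

     D BigArr          S             80A   DIM(61)
     D pWork           S               *
     D WorkArr         S             80A   DIM(60) BASED(pWork)
     D NewRecord       S             80A

      /free
        pWork = %addr(BigArr(2));   // WorkArr(1-60) overlays BigArr(2-61)

        // ...later, when a record has been read into NewRecord...
        BigArr(1) = NewRecord;      // new record goes into the spare slot
        pWork = %addr(BigArr(1));   // WorkArr(1) is now the new record --
                                    // nothing copied, only the address changed
      /end-free

The spare element is used up by that first switch, of course: once WorkArr
sits on elements 1-60 there is nowhere left to shift to, which is part of
why the questions below about repeated reads matter.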
In any case, the real first question to ask is simply "Does it need
to be faster?"
If you do this once because you now have to store a new record, and
there's only one new record when the job runs, it wouldn't seem to
matter much. But if this is just one instance in a sequence of many,
many records...?
Also, does the array need to shift by 1 every time a new record is
read and there will be multiple reads? That is, does the new element
1 need to become element 2 on the 2nd READ? ...the 3rd READ? ...many
READs in a row?
And then, what happens to the 60 elements?
The number of elements could suggest a performance-gathering
mechanism that updates at 1-second or 1-minute intervals. The array
might hold rolling stats for the latest minute or hour. Or maybe the
array holds monthly sales figures for a rolling 5-year period and
it's updated at each month-end.
Is this a memory-resident array that needs to be sensitive to
speed/performance? Or is this building a set of one (or 60?)
database records? A "1-second" process obviously might need more
speed than a "1-month" process.
Tom Liotta