

  • Subject: Re: subfile processing: SFLSIZ=SFLPAG (was: readc & sflnxtchg)
  • From: david.kahn@xxxxxxxxxxxxxxxxxx
  • Date: Wed, 17 Feb 1999 11:26:14 +0000

Bob Larkin <blarkin@wt.net> wrote:

>A simple trick for handling subfiles is to define duplicate subfiles. For
>a position to a prior record (Page Up), find the starting record, load the
>first page, then read the first subfile by RRN, writing to subfile B.
>Simple code to determine if A or B is the current SFL to use.

I've thought of doing this in the past, but I've never experimented with
it. Firstly, having two subfiles doing the job of one is probably going to
confuse the next programmer who picks up the code (let alone poor old me).
Secondly, it requires you to spin through and copy the entire subfile on
every rolldown, so it's not terribly efficient. Thirdly, you run into
problems when your rolldown is near the beginning of the file.

For example, say we have SFLPAG=12 and subfile A currently contains data
records 5 through 16. When rolldown is used the program determines that
subfile B needs to be loaded from record 1. We therefore load records 1
through 12 into subfile B, and then, if we're not careful, we blindly copy
A to B, giving us records 5 through 16 on page 2. Records 5 through 12 are
now loaded twice. :-(

So we need some kind of mechanism to prevent the overlap: the program
realises it only needs to load 4 records from the data into subfile B, and
then it copies 5 through 16 from A. Now when the user presses rollup he
gets page 2 with only 4 lines loaded (13 through 16). It looks as though
the subfile is at its end, yet the "More..." prompt shows and rollup is
still enabled. :-( So now we also need a mechanism that tells the program
to complete the last page before displaying it. Wow.
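To convince myself of the arithmetic, I sketched the rolldown calculation
out in Python rather than RPG, with records identified simply by their
sequence numbers, so treat it as illustration only rather than working
code:

SFLPAG = 12

def rolldown(subfile_a, total_records):
    # subfile_a     -- record numbers currently loaded in subfile A
    # total_records -- number of records in the data file
    old_top = subfile_a[0]
    new_top = max(1, old_top - SFLPAG)

    # Records that must be read fresh from the data file.
    fresh = list(range(new_top, old_top))

    # Everything already in A is copied across rather than re-read,
    # which avoids loading records 5 through 12 twice as above.
    subfile_b = fresh + subfile_a

    # If the last page is now short, read forward to complete it;
    # otherwise "More..." shows on a page that is only partly loaded.
    shortfall = -len(subfile_b) % SFLPAG
    last = subfile_b[-1]
    top_up = list(range(last + 1, min(last + shortfall, total_records) + 1))
    return subfile_b + top_up

# The example above: subfile A holds 5 through 16, rolldown goes to record 1.
print(rolldown(list(range(5, 17)), 100))
# -> 1 through 24: 4 read fresh, 12 copied from A, 8 read to complete page 2

Even written out that baldly it's more bookkeeping than I'd like.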

And fourthly, couldn't the whole thing be done with a single subfile
anyway, by shuffling everything along? You still have to be careful not to
load duplicates on page 2 or leave gaps on the last page.

>Another technique I learned from Harry Hiles was to write duplicates of
>the records in a data queue using APIs. A little cumbersome the first
>time you write it, but it is easy to clone, and provides very robust
>subfile handling.

This is interesting. Presumably the queue is LIFO and we load it from the
bottom of the subfile to the top? On rollup we pop records from the queue,
and if it's empty we go back to reading from the data. That eliminates the
last-page problem. When we're reading back through the data to find the new
top record we can push those records into the queue too; we then clear the
subfile and load page 1 from the queue, which eliminates the page 2
duplicates problem as well. Neat.
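If I've understood the scheme correctly, the bookkeeping would look
roughly like this (again Python, with an ordinary list standing in for
the *LIFO data queue and record positions standing in for the records,
so it's a sketch of the idea rather than of the QSNDDTAQ/QRCVDTAQ calls):

SFLPAG = 12

class Pager:
    # One-page subfile (SFLSIZ=SFLPAG) paged with a LIFO queue behind it.

    def __init__(self, nbr_records):
        self.data = list(range(nbr_records))  # stands in for the database file
        self.stack = []                       # stands in for the *LIFO data queue
        self.page = self.data[:SFLPAG]        # current subfile contents
        self.hwm = len(self.page)             # next record never yet read forward

    def rollup(self):
        # Page forward: replay saved records first, then read from the data.
        new_page = []
        while self.stack and len(new_page) < SFLPAG:
            new_page.append(self.stack.pop())
        while self.hwm < len(self.data) and len(new_page) < SFLPAG:
            new_page.append(self.data[self.hwm])
            self.hwm += 1
        if new_page:
            self.page = new_page

    def rolldown(self):
        # Page back: save the current page, read backwards, rebuild page 1.
        top = self.page[0]
        if top == 0:
            return                            # already at the top of the data
        # Push the current page from the bottom of the subfile to the top ...
        for rec in reversed(self.page):
            self.stack.append(rec)
        # ... then read backwards through the data, pushing as we go.
        for pos in range(top - 1, max(top - SFLPAG, 0) - 1, -1):
            self.stack.append(self.data[pos])
        # Clear the subfile and load the new page from the queue.
        self.page = [self.stack.pop() for _ in range(SFLPAG)]

p = Pager(100)
p.rollup(); p.rollup()    # 12-23, then 24-35, all read from the data
p.rolldown()              # back to 12-23
p.rollup()                # 24-35 again, replayed straight off the queue

No duplicates, no short pages, and a page that was rolled past comes back
off the queue instead of being re-read from the data.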

However, we still have the overhead of having to spin through the entire
subfile, a workload that increases with every subsequent rolldown. Imagine
a user starting near the end of the data and then holding down the Page Up
key. We could probably gain something, too, by replacing the data queue
with a simple direct file and avoiding the API calls. After all, a data
queue is usually for passing data from one program to another.

On balance, I think I'll stick with the array of off-subfile settings. We
maintain the subfile in the normal way, but as we load each record we check
for a matching array entry. If there's a match we plug the user's
selections, and any conditioning indicators that were set, back in.
Although it's messy, it drastically reduces the number of reads and writes
in the worst cases. Unfortunately the size of the array puts an arbitrary
limit on the number of off-subfile settings we can support. Maybe a user
index is the answer?
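For what it's worth, the bookkeeping I have in mind is roughly this (a
Python sketch, with a dictionary standing in for the array, or for a
*USRIDX user index, which would remove the size limit; the field names
are made up):

saved = {}                      # keyed by the record's key field

def save_page(subfile):
    # Before the subfile is cleared, remember anything the user set.
    for rec in subfile:
        if rec["option"] or rec["indicators"]:
            saved[rec["key"]] = (rec["option"], rec["indicators"])

def load_record(key, fields):
    # Build one subfile record, plugging back any off-subfile settings.
    rec = {"key": key, "option": "", "indicators": set(), **fields}
    if key in saved:
        rec["option"], rec["indicators"] = saved[key]
    return rec

The store only grows with the records the user actually touches, which is
the attraction over copying whole pages around.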

I'd better stop before my brain overheats. :-)

Thanks for the excellent ideas, Bob. I'm sure you'll put me straight if my
analysis is faulty.

Dave Kahn, ABB Steward Ltd.


+---
| This is the Midrange System Mailing List!
| To submit a new message, send your mail to MIDRANGE-L@midrange.com.
| To subscribe to this list send email to MIDRANGE-L-SUB@midrange.com.
| To unsubscribe from this list send email to MIDRANGE-L-UNSUB@midrange.com.
| Questions should be directed to the list owner/operator: david@midrange.com
+---

