

  • Subject: Re: subfile processing: SFLSIZ=SFLPAG (was: readc & sflnxtchg)
  • From: Bob Larkin <blarkin@xxxxxx>
  • Date: Wed, 17 Feb 1999 21:34:55 -0800

Actually, there is a simple solution. When you "PageUp/RollDown" to a
record prior to the starting point, you simply use NEGATIVE SUBFILE
RELATIVE RECORD NUMBERS!! It's really a simple concept: you just subtract
1 instead of adding. Oh, wait a minute, I forgot which dimension I'm in.
Must have slipped into a parallel dimension. Actually, if IBM would
consider this, it would make subfile processing mucho simpler.
Technically, it seems like it wouldn't be too much work. We would need to
specify the starting RRN as -9999 to 9999, and we would need a keyword to
indicate whether the subfile uses negative SFLRRN or the original scheme
of 1 to 9999.
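
To make the thought experiment concrete, here's a toy Python model of how
such a scheme might behave. This is pure speculation - no such DDS keyword
exists, and every name below is invented:

    PAGE = 12                     # SFLPAG

    subfile = {}                  # hypothetical subfile keyed by signed RRN

    def load_forward(records, start_rrn):
        # Load a page going forward: the RRN increases, as subfiles
        # work today.
        rrn = start_rrn
        for rec in records:
            subfile[rrn] = rec
            rrn += 1

    def load_backward(records, start_rrn):
        # Load a page going backward: the RRN decreases below the
        # original starting point -- the part IBM would have to allow.
        rrn = start_rrn
        for rec in records:
            subfile[rrn] = rec
            rrn -= 1

    # The first page occupies RRNs 1..12; one roll backward fills
    # RRNs 0, -1, ... -11. No copying, no second subfile.
    load_forward([f"data rec {n}" for n in range(13, 25)], 1)
    load_backward([f"data rec {n}" for n in range(12, 0, -1)], 0)
    print(min(subfile), max(subfile))   # -11 12: one contiguous window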

Any personal thoughts from the IBM camp?
Bob

david.kahn@gbwsh.mail.abb.com wrote:

> Bob Larkin <blarkin@wt.net> wrote:
>
> >A simple trick for handling subfiles is to define duplicate subfiles. For
> >a position to a prior record (Page Up), find the starting record, load the
> >first page, then read the first subfile by RRN, writing to subfile B.
> >Simple code determines whether A or B is the current SFL to use.
>
> I've thought of doing this in the past, but I've never experimented with
> it. Firstly, having 2 subfiles doing the job of 1 is probably going to
> confuse the next programmer to pick up the code (let alone poor old me).
> Secondly, it requires you to spin through and copy the entire subfile with
> every rolldown and is therefore not terribly efficient. Thirdly, you run
> into problems when your rolldown is near the beginning of the file.
>
> For example, we have SFLPAG=12 and subfile A currently contains data
> records 5 through 16. When rolldown is used, the program determines that
> subfile B needs to be loaded from record 1. We therefore load 1 through 12
> into subfile B, then if we're not careful we will blindly copy A to B,
> giving us records 5 through 16 on page 2. We therefore have records 5
> through 12 loaded twice. :-(
>
> So we need some kind of mechanism to prevent the overlap. The program
> realises it only needs to load 4 records from data into subfile B, then it
> copies 5 through 16 from A. Now when the user presses rollup he gets page 2
> with only 4 lines loaded (13 through 16). It looks like the subfile's at
> end, but the "more" prompt shows and rollup is still enabled. :-( So now we
> also need a mechanism to tell the program to complete the last page before
> displaying. Wow.
>
> And fourthly, couldn't the whole thing be done with a single subfile
> anyway, by shuffling everything along? You still have to be careful not to
> load duplicates on page 2 or leave gaps on the last page.
>
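To pin down the arithmetic in Dave's example above, here's a rough Python
sketch of the copy-with-a-guard logic. The names and data layout are
invented for illustration; real code would do this with RRN reads and
writes against the two subfiles:

    SFLPAG = 12
    data = list(range(1, 101))        # stand-in for the database file
    subfile_a = list(range(5, 17))    # subfile A holds records 5..16

    def rolldown(current, new_top):
        # First mechanism: read only the records not already in the
        # current subfile, then copy the current subfile in after them.
        fresh_count = current[0] - new_top            # 4 reads, not 12
        fresh = data[new_top - 1 : new_top - 1 + fresh_count]
        return fresh + current

    subfile_b = rolldown(subfile_a, 1)
    print(subfile_b[:SFLPAG])      # page 1: records 1..12, no overlap
    print(subfile_b[SFLPAG:])      # page 2: only 13..16 -- too short

    # Second mechanism: complete the short last page before display.
    short = len(subfile_b) % SFLPAG
    if short:
        last = subfile_b[-1]
        subfile_b += data[last : last + (SFLPAG - short)]
    print(subfile_b[SFLPAG:])      # page 2: records 13..24, full
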
> >Another technique I learned from Harry Hiles was to write duplicates of
> >the records in a data queue using APIs. A little cumbersome the first
> >time you write it, but it is easy to clone, and it provides very robust
> >subfile handling.
>
> This is interesting. Presumably the queue is LIFO and we load it from the
> bottom of the subfile to the top? On rollup we pop records from the queue.
> If it's empty we go back to loading from the data. This eliminates the
> last-page problem. When we're reading back through the data to find the new
> top record we can push those records into the queue too. We then clear the
> subfile and load page 1 from the queue, eliminating the page 2 duplicates
> problem too. Neat.
>
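That reading of the LIFO queue idea can be modelled with an ordinary
Python list standing in for the data queue (the real technique would go
through the QSNDDTAQ/QRCVDTAQ APIs; everything here is a simplified
stand-in):

    SFLPAG = 12
    data = list(range(1, 101))   # pretend database file, records 1..100
    queue = []                   # LIFO: append = push, pop() = pop

    def read_data(start, count):
        # Read `count` records from the data starting at record `start`.
        return data[start - 1 : start - 1 + count]

    def rolldown(subfile, new_top):
        # Push the current subfile bottom-to-top, then the records we
        # pass while backing up to new_top, and reload from the queue.
        for rec in reversed(subfile):
            queue.append(rec)
        for rec in range(subfile[0] - 1, new_top - 1, -1):
            queue.append(rec)
        return [queue.pop() for _ in range(SFLPAG)]

    def rollup(subfile):
        # Pop the next page from the queue; if the queue runs dry,
        # finish the page from the data so it is never left short.
        page = [queue.pop() for _ in range(min(SFLPAG, len(queue)))]
        if len(page) < SFLPAG:
            last = page[-1] if page else subfile[-1]
            page += read_data(last + 1, SFLPAG - len(page))
        return page

    subfile = read_data(5, SFLPAG)    # records 5..16 on screen
    subfile = rolldown(subfile, 1)    # back to record 1
    print(subfile)                    # [1 .. 12] -- no duplicates
    print(rollup(subfile))            # [13 .. 24] -- a full last page
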
> However, we still have the overhead of having to spin through the entire
> subfile - a workload that increases with every subsequent rolldown. Imagine
> a user starting near the end of the data and then holding down the Page Up
> key. We could probably gain something, too, by replacing the data queue
> with a simple direct file and avoiding the API calls. After all, a data
> queue is usually for passing data from one program to another.
>
> On balance, I think I'll stick with the array of off-subfile settings. We
> maintain the subfile in the normal way, but as we load each record we check
> for an array match. If there's a match we plug back in the user's
> selections and any conditioning indicators that were set. Although it's
> messy, we drastically reduce the number of reads and writes in the worst
> cases. Unfortunately we're also arbitrarily limited by the size of the
> array in the number of off-subfile settings we can support. Maybe a user
> index is the answer?
>
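For what it's worth, the off-subfile settings array might be modelled
along these lines - a rough sketch with invented names, where a keyed user
index (*USRIDX) could replace the capacity-limited structure:

    MAX_SAVED = 50                # the arbitrary array limit
    saved = {}                    # record key -> (selection, indicators)

    def save_setting(key, selection, indicators):
        # Remember an off-subfile setting, as far as capacity allows.
        if key in saved or len(saved) < MAX_SAVED:
            saved[key] = (selection, indicators)
        # else: silently lost -- the weakness of a fixed-size array

    def load_record(key, record):
        # Load one subfile record, plugging back any saved settings.
        if key in saved:
            record["sel"], record["ind"] = saved[key]
        return record

    save_setting(1042, "4", {"sflnxtchg": True})
    print(load_record(1042, {"sel": " ", "ind": {}}))
    # {'sel': '4', 'ind': {'sflnxtchg': True}}
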
> I'd better stop before my brain overheats. :-)
>
> Thanks for the excellent ideas, Bob. I'm sure you'll put me straight if my
> analysis is faulty.
>
> Dave Kahn, ABB Steward Ltd.
>
>