

  • Subject: Re: "extract" command enhancement (subfiles and BIFs)
  • From: "James W. Kilgore" <qappdsn@xxxxxxx>
  • Date: Thu, 23 Sep 1999 09:31:05 -0700
  • Organization: Progressive Data Systems, Inc.

Dan,

Yes, we load the whole subfile (but not necessarily the whole database).  Years ago
we loaded a page at a time to give the illusion of response time.  We also
monitored what was going on and found out that -most- subfile usage displayed
500 or fewer records.

With all the inherent problems of page-at-a-time loading mentioned on this thread,
we opted to change our standard template.  Sure, we still get a user who insists
on doing a customer search by typing just a single letter, and they sit and wait
and max out the subfile.  When that occurs we send a status message telling them
to refine their search.  (We also capped the search at 2000 entries.)

It's the filtering of the database that consumes the time, so that's where our
efforts are placed.  In the customer search example, we started out using
OPNQRYF as the filter mechanism; now we have a keyword-in-context file that we
can do a SETLL/READ on instead, and that helped a whole lot more than
page-at-a-time ever could.
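The keyword-in-context idea can be sketched in Python (rather than RPG, so readers without an AS/400 handy can try it); the customer data, field names, and 2000-entry cap placement here are hypothetical illustrations, not the poster's actual file layout.  The index holds one entry per word of each customer name, kept sorted so a positioned read (the moral equivalent of SETLL followed by READ on a keyed logical) replaces a full filter scan:

```python
from bisect import bisect_left

# Hypothetical customer records: (name, customer_id).
customers = [
    ("ACME SUPPLY", 1001),
    ("GLOBAL ACME PARTS", 1002),
    ("NORTHERN FREIGHT", 1003),
]

# Build the keyword-in-context "file": one (keyword, id) entry per word,
# sorted so it behaves like a keyed access path.
kwic = sorted(
    (word, cust_id)
    for name, cust_id in customers
    for word in name.split()
)

def search(prefix, cap=2000):
    """SETLL-style positioned read: position to the first keyword >= prefix,
    then READ forward while entries still match, capped like the post's
    2000-entry limit."""
    hits = []
    i = bisect_left(kwic, (prefix,))          # the SETLL
    while i < len(kwic) and kwic[i][0].startswith(prefix) and len(hits) < cap:
        hits.append(kwic[i][1])               # the READs
        i += 1
    return hits

print(search("ACME"))   # → [1001, 1002]
```

Because "ACME" appears mid-name in the second record, a keyword-in-context index finds it where a simple position-to on the name field would not.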

IMHO, it would be great if subfiles could be keyed (like data queues) so one
could use SETLL/READ on them to set the page to be displayed.  But I'm not
holding my breath <g>

For those that have not used binary halving, here's a little primer.
Let's say you have a subfile (or ordered array) with 600 entries and you want to
"position to" a given key value.

To start the process, you get the key values at records 1 and 600 so you can
test whether the request is within the range of entries.
If it's not, you deal with it and bypass the whole halving process.

You take the high number (600) and the low number (1) and find the midpoint (300).
You get the value at 300 and compare it to your search argument.

At this point you know whether the result is in the first or last half of the
subfile/array.  If you happen to hit it right on, set the subfile record number
or array index and leave the loop.
(For this example, let's say it's in the upper half.)

Now that you have made that determination, you go back to the top of the loop,
use 300 as your low number and 600 as your high number, and find the midpoint (450).
You get the value at 450 and compare it to your search argument.

Each iteration of the loop "halves" the remaining possibilities.  In the example
I posted, we stop the process once the spread between the low and high numbers
is reduced to the subfile page size (usually 10) and start a simple looping
chain/compare.
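The primer above can be sketched in Python (rather than RPG, for the sake of a runnable example); the 600-entry key list and the page size of 10 follow the numbers used in the post, but the function name and return convention are my own:

```python
PAGE_SIZE = 10   # typical subfile page size, per the post

def position_to(keys, target):
    """Binary halving over an ordered list: return the index of the first
    entry >= target, or None if the target is outside the loaded range."""
    if not keys:
        return None
    low, high = 0, len(keys) - 1
    # Range test first: bypass the whole halving process if out of range.
    if target < keys[low] or target > keys[high]:
        return None
    # Halve the low/high spread until it is within one subfile page.
    while high - low > PAGE_SIZE:
        mid = (low + high) // 2
        if keys[mid] == target:
            return mid            # hit it right on: leave the loop
        elif keys[mid] < target:
            low = mid             # result is in the upper half
        else:
            high = mid            # result is in the lower half
    # Spread is now one page or less: simple looping chain/compare.
    for i in range(low, high + 1):
        if keys[i] >= target:
            return i
    return high

keys = [f"{n:04d}" for n in range(0, 1200, 2)]   # 600 ordered key values
print(position_to(keys, "0450"))                 # → 225
```

With 600 entries the halving loop runs at most about six times before the spread is down to one page, versus up to 600 compares for a straight sequential scan.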

HTH


Dan Bale wrote:

> James,
>
> >
> >I've included a code snippet, hopefully it's self explanatory.
>
> Well, not quite. <g>  Are you loading the subfile with the entire database?

+---
| This is the RPG/400 Mailing List!
| To submit a new message, send your mail to RPG400-L@midrange.com.
| To subscribe to this list send email to RPG400-L-SUB@midrange.com.
| To unsubscribe from this list send email to RPG400-L-UNSUB@midrange.com.
| Questions should be directed to the list owner/operator: david@midrange.com
+---



