Hmmm... quite a few different ways to deal with this come to mind.
1) You could set up a loop that reads the whole detail file,
   and for each record CHAIN to the header file.  Create an
   array or Multiple-Occurrence Data Structure (MODS) from the
   data that you want to put in your subfile.  Once you've read
   all of the records, sort this array or MODS.  (You can sort a
   MODS using the QLGSORT API.)  Then use this data to load your
   subfile.  (There's a rough sketch of this one after the list.)
2) You could do the same thing as #1 but use a user space instead
   of a MODS.  You could use a BASED data structure to make
   reading the data easier.  Of course, this might not be a
   good suggestion for someone who doesn't like pointers :)
   (See the second sketch after the list.)
3) If the maximum size of your detail file is really huge, you
   might consider using a user index instead.  This would handle
   some REALLY big sets of data, and would also take care of the
   sorting for you.
4) If you're not an API lover, you could do pretty much the same
   thing as #2 suggests by creating a temporary file in QTEMP
   that contains the data you want.
5) Of course, the simplest way -- and the way that'll have the
   fastest run-time response -- would be to put the alphabetic
   data in your detail file as the detail file is generated,
   and then simply make a logical file keyed by this data.  The
   only part of the data that you'd need to carry is the field
   you want things sorted by; the rest you could get by CHAINing
   back with the "unique key value" that you're already storing.
   (The last sketch after the list shows the subfile load for
   this one.)  But if the data is very dynamic, or takes up a
   lot of space, this may not be a good solution...
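
Here's a very rough sketch of what I mean by #1.  It's written in
free-form RPG IV just to keep it readable, it uses SORTA on a plain
array instead of QLGSORT on a MODS, and every file, field, and size
in it (DETAIL, HEADER, DTLID, HDRNAME, the 35/9 lengths) is made up,
so treat it as an outline rather than finished code:

**FREE
// Read every detail record, CHAIN to the header for the name, keep
// name+id together in one array element, then sort the filled part
// of the array before loading the subfile.

dcl-f DETAIL;                        // detail file, read in arrival order
dcl-f HEADER keyed;                  // header file, keyed by the unique id

dcl-s entry char(44) dim(1000);      // 35-byte name + 9-digit id
dcl-s count int(10) inz(0);

read DETAIL;
dow not %eof(DETAIL) and count < %elem(entry);
   chain DTLID HEADER;               // DTLID = header id stored in detail
   if %found(HEADER);
      count += 1;
      // name first so the sort comes out alphabetic; the id rides along
      entry(count) = HDRNAME + %editc(DTLID: 'X');
   endif;
   read DETAIL;
enddo;

sorta %subarr(entry: 1: count);      // sort only the elements we filled

// ...load the subfile from entry(1) through entry(count); the id is in
// the last nine positions if you need to CHAIN back for more data.
*inlr = *on;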
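
And here's the BASED data structure idea from #2.  QUSPTRUS really is
the "Retrieve Pointer to User Space" API, but the space name, the
layout, and the field names are all invented, and it assumes the space
has already been created and filled with your collected records:

**FREE
// Map a BASED data structure over a user space so the collected
// entries can be walked with a pointer instead of being copied around.

dcl-pr GetSpacePtr extpgm('QUSPTRUS');   // Retrieve Pointer to User Space
   spaceName char(20) const;             // qualified name: space + library
   spacePtr  pointer;                    // returned pointer to the space
end-pr;

dcl-s p_entry pointer;

dcl-ds entry based(p_entry) qualified;   // one collected record
   name char(35);
   id   packed(9: 0);
end-ds;

dcl-s entryCount int(10);                // set when you filled the space
dcl-s i int(10);

GetSpacePtr('SORTDATA  QTEMP' : p_entry);   // point at the first entry

for i = 1 to entryCount;
   // entry.name and entry.id now describe the i'th record...
   p_entry += %size(entry);              // ...then step to the next one
endfor;

*inlr = *on;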
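
And the subfile load for #5 really is just a read loop over the new
logical, CHAINing back to the header when you need the rest of the
data.  DETALPHA is an assumed logical over the detail file, keyed by
the alphabetic field you'd be adding; the other names are made up too:

**FREE
// DETALPHA returns only the ~300 detail records, already in name
// order, so the subfile load never touches the big header logical.

dcl-f DETALPHA keyed;
dcl-f HEADER keyed;

read DETALPHA;
dow not %eof(DETALPHA);
   chain DTLID HEADER;           // pull the rest of the header data by id
   if %found(HEADER);
      // ...move fields into the subfile record and write it here...
   endif;
   read DETALPHA;
enddo;

*inlr = *on;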
As I say fairly often when replying to people in this group, an awful
lot depends on the specifics of your circumstances...  You really
need to know what's going to work best for YOU.  But reading the
entire alphabetic logical over the header file sounds awfully slow to me.
boothm@earth.goddard.edu (by way of David Gibbs <david@midrange.com>) wrote:
> I have an idea I may have missed something obvious.
>
> Here's the situation: I have a Header File of names and addresses.
> The Header File has a unique identifying number for each name.  The
> numbers are assigned as each new name is added to the file, so the
> numbers really amount to arrival sequence, not to alphabetic sequence.
> I do have a logical over the Header File based upon the name, so that
> any report can be generated in alphabetic order.  All this seems
> simple and straightforward.
>
> Now comes the part where I think I am missing something: I also have
> an application with a small detail file for a specialized application.
> The small group is probably going to be about 300 records but
> certainly never over 1,000 records.  I've linked it logically with the
> Header File by the unique Header File I.D. number.  To present an
> alphabetic subfile on the screen of that small grouping I have to read
> through the entire alphabetic logical on the Header File.  As you can
> imagine, performance is horrible.
>
> In the past I've loaded an *INZSR array, and I've denormalized the
> data.  Neither method has felt right.  Is there some simple thing
> I've missed?
>
> Thanks.
> _______________________
> Booth Martin
> boothm@earth.goddard.edu
> http://www.spy.net/~booth
> _______________________