Hi Kurt

From a cursory glance at the docs for QDBBRCDS (I've never used it myself), this seems a reasonable approach. I have some thoughts about how it'd work.

1. Being an asynchronous bring, as Chuck pointed out, the records are brought into main memory independently of your job.
2. Depending on how much CPU processing you do per record, the records may or may not arrive before you read them.
3. The bring apparently stops, with no indication it has done so, when an "invalid RRN" is found in your array. Chuck wondered whether that means a deleted record - it's not clear.
4. If the bring stops prematurely, the reads of RRNs after the failure would be brought in the normal way - still asynchronous via page faults, but no longer pre-brought.
5. This makes me wonder if it'd be useful to have a job waiting on a data queue - that job's sole purpose would be to read records from a file, say 1000 of them, starting with a SETLL to a given RRN (actually, it might be better to use the C RLA functions, so you could have a general-purpose NEP reading the queue). Of course, the cost of opening and closing the file might offset the performance gain. You'd write an entry on the data queue instead of calling this API. That might get around the "invalid RRN" issue - see the sketch after this list.
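A minimal sketch of what that pre-reader NEP might look like, assuming the data queue was created with MAXLEN(10) and each entry carries the starting RRN as a 10-digit character value. QRCVDTAQ is the standard receive-data-queue API; everything else here (PRERDQ, QGPL, TRANSFILE) is made up, and the code is as untested as the idea:

   dcl-f TRANSFILE disk usage(*input);        // read in arrival sequence

   dcl-pr QRCVDTAQ extpgm('QRCVDTAQ');        // receive-data-queue API
      dtaqName char(10) const;
      dtaqLib  char(10) const;
      dataLen  packed(5:0);                   // length of entry received
      data     char(10);                      // the entry itself
      waitTime packed(5:0) const;             // -1 = wait forever
   end-pr;

   dcl-s dataLen  packed(5:0);
   dcl-s entry    char(10);
   dcl-s startRrn packed(10:0);
   dcl-s i        int(10);

   dow *on;                                   // NEP - ended externally (ENDJOB)
      QRCVDTAQ( 'PRERDQ' : 'QGPL' : dataLen : entry : -1 );
      startRrn = %dec( entry : 10 : 0 );      // entry = starting RRN, as digits
      setll startRrn TRANSFILE;               // position by RRN (arrival sequence)
      for i = 1 to 1000;
         read TRANSFILE;                      // each read faults its page in
         if %eof( TRANSFILE );
            leave;
         endif;
      endfor;
   enddow;

The requesting job, instead of calling the API itself, would just QSNDDTAQ the starting RRN of the block it expects to need next.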

Just thinking in an idle moment!! No promise that it'd work - untested except in the deep recesses of my mind!!

Vern

On 6/8/2010 9:14 AM, Kurt Anderson wrote:
Hi Mark,

I'm not finding much for IBM documentation on QDBBRCDS.

With your suggestion of accessing the QDBBRCDS API before the read, how are you calling the API? Are you doing something like the following:

DoU %eof( File );
   If rrnCount = 0;
      buildRRNArray();                           // fill rrnArray with the next 1000 RRNs
      rrnCount = 1000;
      QDBBRCDS( file: mbr: rrnArray: rrnCount ); // start the asynchronous bring
   EndIf;
   Read File;
   If not %eof( File );
      rrnCount -= 1;
      // process stuff
   EndIf;
EndDo;

-----Original Message-----
From: rpg400-l-bounces@xxxxxxxxxxxx [mailto:rpg400-l-bounces@xxxxxxxxxxxx] On Behalf Of Mark S. Waterbury
Sent: Tuesday, June 08, 2010 8:34 AM
To: RPG programming on the IBM i / System i
Subject: Re: Speed in Reading

Hi, Bryce:

For really large files, such as your 8 GB transaction file, use the
QDBBRCDS API to do what is called "anticipatory paging": bring in the
next few blocks of data just prior to their use (that is, just before
your application actually reads the next block of records).

QDBBRCDS returns to your program immediately after initiating the
asynchronous "bring" request, so your application program can continue
to process the current block of records, while the "bring" is taking place.
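To make that overlap concrete, here is a rough, untested sketch of the pattern. IBM's documentation for QDBBRCDS is scarce, so the prototype below - parameter names, types, and order - is only an assumption inferred from the four-parameter call in Kurt's snippet above; verify it before relying on it. TRANSFILE is a stand-in for the big file.

   dcl-f TRANSFILE disk usage(*input);       // the big file, arrival sequence

   dcl-pr QDBBRCDS extpgm('QDBBRCDS');       // ASSUMED prototype - verify!
      fileName char(10) const;
      mbrName  char(10) const;
      rrns     uns(10)  dim(1000) const;     // RRNs to bring (type is a guess)
      count    int(10)  const;               // how many entries are used
   end-pr;

   dcl-s rrnArray     uns(10) dim(1000);
   dcl-s nextBlockRrn uns(10) inz(1001);     // first bring targets block 2;
   dcl-s i            int(10);               // block 1 is read the normal way

   dou %eof( TRANSFILE );
      // Fill the array with the RRNs of the block AFTER the one we are
      // about to process, and fire off the asynchronous bring for it.
      for i = 1 to 1000;
         rrnArray(i) = nextBlockRrn + i - 1;
      endfor;
      QDBBRCDS( 'TRANSFILE' : '*FIRST' : rrnArray : 1000 );  // returns at once
      for i = 1 to 1000;                     // process THIS block while the
         read TRANSFILE;                     // next one pages in behind us
         if %eof( TRANSFILE );
            leave;
         endif;
         // process the record
      endfor;
      nextBlockRrn += 1000;
   enddo;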

Mark

On 6/8/2010 8:20 AM, Bryce Martin wrote:
I like the idea and all, but in this case and others like it, we are
talking about putting a large object into a memory pool. Now, I don't
know what you guys consider a large object, and maybe I don't know enough
about memory pools (which is probably the case). But say you need to
process a 'big' file - I'm thinking that's something like 8 gigs of data,
maybe an inventory transaction file - and you stick that thing in memory.
Won't you be eating up 8 gigs of the system's memory? Is that really a
good thing to do?

I really don't know if that is how the memory pool works, but that is the
thought that came to my mind.


Thanks
Bryce Martin
Programmer/Analyst I
570-546-4777
