


On 12/23/05, Robert Cozzi <cozzi@xxxxxxxxx> wrote:
>
> Isn't that the number of records for sequential "get" operations? Not a
> block size?  Years ago, I remember Dick Bains mentioning that using the
> NBRRCDS() parm with the number of records set to the record count of the
> file was a way to cache the file into storage.
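For reference, the NBRRCDS() approach Cozzi describes is set with the OVRDBF command before the program opens the file. A minimal CL sketch, assuming a made-up file DUJE_LOG and program MYRPTPGM (note NBRRCDS caps at 32767, so a file larger than that can't be covered in a single override value):

```
OVRDBF  FILE(DUJE_LOG) NBRRCDS(32767)  /* ask for up to 32767 records per I/O */
CALL    PGM(MYRPTPGM)                  /* program that reads DUJE_LOG         */
DLTOVR  FILE(DUJE_LOG)                 /* remove the override afterward       */
```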


I have seen the NBRRCDS() parm, and I have tried to use it, but for some
reason it doesn't seem to make the READs pull more records into the buffer.
I am usually trying to read from a file in key sequence (which makes
SEQONLY() _sound_ like the wrong parm to use).  If I use SEQONLY, I can view
the number of records read as the job is running, and see that it is
grabbing more records at a time.  If I use NBRRCDS, it only grabs the
default number of records.
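The SEQONLY behavior described above is also set through OVRDBF. A minimal sketch, again with a hypothetical file DUJE_LOG and reader program; as I understand it, the blocking only takes effect if the program sticks to sequential reads, so random operations between READs can cause the system to disregard the override:

```
OVRDBF  FILE(DUJE_LOG) SEQONLY(*YES 1000)  /* block 1000 records per transfer */
CALL    PGM(MYREADPGM)                     /* reads DUJE_LOG sequentially     */
DLTOVR  FILE(DUJE_LOG)
```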

I just wish that, for Christmas one year, we had a way to specify, on the
F-spec, how many records to read into the buffer on each READ.  It could be
as simple as

     FDuje_log  IF   E           K Disk    Block(*Yes 1026)

--
"Enter any 11-digit prime number to continue..."
"In Hebrew SQL, how do you use right() and left()?..." - Random Thought
"If all you have is a hammer, all your problems begin to look like nails"


This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].