  • Subject: Re: File Reads
  • From: Larry Bolhuis <lbolhui@xxxxxxx>
  • Date: Mon, 28 Sep 1998 22:31:46 -0400
  • Organization: Arbor Solutions, Inc

VENU,

  This is an extension to my earlier reply to your question.

  You have correctly realized that paging is a potential problem (which
you would probably get if you blocked 32K!).  Realistically, the file size
should not affect the blocking size.  Record length does have an effect,
as you realize, since bigger records consume more memory.

  Remember to specify a number of records in the SEQONLY parm.  NBRRCDS
gets the data from the disk to memory in blocks, while SEQONLY gets them
from the DB (in memory because of NBRRCDS) to the program.  In many
cases it is SEQONLY that saves the big CPU time, while NBRRCDS saves the
I/Os.
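
  For illustration, a minimal CL sketch of this kind of override (the
file name MYFILE, the program name MYRPGPGM, and the 272-record block
are placeholders only; pick the block size for your own record length):

    /* SEQONLY blocks DB-to-program moves, NBRRCDS blocks disk reads */
    OVRDBF     FILE(MYFILE) SEQONLY(*YES 272) NBRRCDS(272)
    CALL       PGM(MYRPGPGM)
    DLTOVR     FILE(MYFILE)   /* remove the override afterwards */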

  Also, while this job is running, be sure it runs in a memory pool
where the paging option is set to *CALC and there is plenty of memory.
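
  If the job runs in a shared pool, the paging option can be changed
with CHGSHRPOOL.  A one-line sketch (*SHRPOOL1 is only an example;
substitute the pool your subsystem actually routes the job to):

    CHGSHRPOOL POOL(*SHRPOOL1) PAGING(*CALC)  /* let the system tune paging */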

  Larry Bolhuis
  Arbor Solutions, Inc
  lbolhui@ibm.net

  
VENU YAMAJALA wrote:
> 
> Hi All,
> 
> This is an extension to my earlier question. We are planning to use the
> blocking factor for the datafile. Before calling our main RPG (or the main
> pgm), we will do an OVRDBF of the datafile with SEQONLY(*YES) NBRRCDS(xxx). But
> we want to pass a parameter for the NBRRCDS value instead of a hard-coded value.
> This way we can control the blocking factor based on file size. We came up
> with a formula as:
> 
> for CISC :  (32 * 1024) / RecordLength
> for RISC :  (64 * 1024) / RecordLength
> 
> rounded down to the nearest whole number. We tried this with a file of 1.2
> million records and a record length of 118. The blocking factor we got is
> 272. It took approx. 15 mins to process 40,000 records. Earlier it took more
> than an hour to process 60,000 records. This means we are gaining some
> improvement. But we are still a little doubtful about the formula. Is there
> any way we can calculate the blocking factor differently and more
> efficiently? We could hard-code the blocking factor as 32767 records (the
> maximum), but this might cause some paging problems. So we want to calculate
> the blocking factor for each file at run-time. Any ideas or thoughts?
> 
> Thanks in advance for any help. Good day.
> 
> Rgds
> Venu
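
  A minimal CL sketch of the run-time calculation described above.  The
program and variable names are placeholders, the record length is
assumed to be passed in as a parameter, and the result is capped at the
32767-record maximum; treat it as an illustration, not a tested
utility:

    PGM        PARM(&FILE &RCDLEN)
    DCL        VAR(&FILE)    TYPE(*CHAR) LEN(10)
    DCL        VAR(&RCDLEN)  TYPE(*DEC)  LEN(5 0)
    DCL        VAR(&NBRRCDS) TYPE(*DEC)  LEN(5 0)

    /* 64K block for RISC (use 32768 for CISC).  Assigning the     */
    /* result to a zero-decimal variable drops the fraction, which */
    /* rounds the blocking factor down.                            */
    CHGVAR     VAR(&NBRRCDS) VALUE(65536 / &RCDLEN)
    IF         COND(&NBRRCDS *GT 32767) +
                 THEN(CHGVAR VAR(&NBRRCDS) VALUE(32767))

    /* Block both the disk-to-memory reads and the DB-to-program moves */
    OVRDBF     FILE(&FILE) SEQONLY(*YES &NBRRCDS) NBRRCDS(&NBRRCDS)
    CALL       PGM(MAINPGM)   /* the main RPG program */
    DLTOVR     FILE(&FILE)
    ENDPGM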