Several of them will end up having millions of records.
I have captured the current file size information from the last run,
so I was planning to use CHGPF to change the size on each file to
the number of records in the current file plus 10%, along with
ALLOCATE(*YES).
From what you have indicated, it appears that this will speed things
up a bit - and all files get cleared before the job starts, which
will activate the file size allocation.
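For what it's worth, the per-file change would look something like this
(CONVLIB is just a placeholder library name here, and 550,000 is the last
run's ADDRESSES record count plus 10%):

    CHGPF  FILE(CONVLIB/ADDRESSES) SIZE(550000 10000 100) +
             ALLOCATE(*YES)
    CLRPFM FILE(CONVLIB/ADDRESSES)  /* clearing the member makes the initial allocation */

The same pattern would repeat for each file, plugging in its own captured
record count.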
"CRPence" wrote:
On 15-Feb-2012 15:29, sjl wrote:
I have a very long-running conversion job stream which we are
about to kick off, and I'm wondering if it will save time if I
change the PFs to have an initial size that is at least as large
as the number of records that I know will be written to each
file.
For example, I have an address master file named ADDRESSES which
ended up with almost 500,000 records during the last conversion.
Will it save much time if I do CHGPF FILE(ADDRESSES) SIZE(500000
10000 100) ALLOCATE(*YES) so that it will have an initial
allocation of 500,000 records once it is cleared, instead of
letting it default to SIZE(100000 10000 100) and the system
having to do extents on it once it hits the maximum size of
100,000 records?
<<SNIP>>
The best bet is [as asked] to avoid both types of extents by
choosing a near/known final size with ALLOCATE(*YES); i.e. the
"wasted" storage for rows is by your own choice vs an algorithm.