From what you have indicated, it appears that this will speed things up a bit. All of the files get cleared before the job starts, which will activate the file size allocation.
I have a very long-running conversion job stream which we are about
to kick off, and I'm wondering if it will save time if I change the
PFs to have an initial size that is at least as large as the number
of records that I know will be written to each file.
For example, I have an address master file named ADDRESSES which
ended up with almost 500,000 records during the last conversion.
Will it save much time if I do CHGPF FILE(ADDRESSES) SIZE(500000 10000
100) ALLOCATE(*YES) so that it will have an initial allocation of
500,000 records once it is cleared, instead of letting it default to
SIZE(100000 10000 100) and the system having to do extents on it once
it grows past the initial 100,000 records?
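For what it's worth, here is a minimal CL sketch of the idea being discussed. The library name CONVLIB and the wrapping CL program are only assumptions for illustration; substitute whatever library and job stream you actually use:

    PGM
        /* Raise the initial allocation to cover the expected 500,000   */
        /* records: SIZE(initial-records increment number-of-increments)*/
        CHGPF      FILE(CONVLIB/ADDRESSES) SIZE(500000 10000 100) +
                     ALLOCATE(*YES)
        /* With ALLOCATE(*YES), clearing the member is what triggers    */
        /* the initial storage allocation before the conversion writes. */
        CLRPFM     FILE(CONVLIB/ADDRESSES)
    ENDPGM

The point of the CLRPFM step is that, as noted above, the allocation takes effect when the member is cleared (or created), so the clear should happen after the CHGPF and before the conversion starts loading records.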