My recollection is that ADDPFM, not CLRPFM, will activate the allocation attribute; I seem to recall being /informed/ that my prior assumption [a false recollection] that CHGPF SIZE() ALLOCATE(*YES) takes effect synchronously was, in general, incorrect. If the ACCPTHSIZ() value is modified or SRCFILE() is specified, then the change does occur with the CHGPF itself; i.e. the ALTER or ALTER-like /change/ code path re-creates the existing members, so the attribute is activated by the "create" [aka "add"] of each altered member.
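
A minimal sketch of the ADDPFM approach, assuming a hypothetical
single-member file MYFILE in library MYLIB; removing and re-adding the
member drives the "add" code path, which should honor ALLOCATE(*YES).
Note that RMVM discards the member's data, presumably acceptable here
only because the files are emptied before the conversion anyway:

  CHGPF FILE(MYLIB/MYFILE) SIZE(500000 10000 100) ALLOCATE(*YES)
  RMVM FILE(MYLIB/MYFILE) MBR(MYFILE)     /* discards the member's data! */
  ADDPFM FILE(MYLIB/MYFILE) MBR(MYFILE)   /* "add" activates the allocation */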

Be sure to test the assumption that the clear will have the desired effect; if not, it is best to use ADDPFM, or to change the access path size [ACCPTHSIZ()] as well while the file is empty [or at any time, for a non-keyed file]. INZPFM, to initialize the member with deleted records up to the next increment, is also an option; it enables more concurrent inserts via record reuse, if the implementation of the /conversion/ might benefit.
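
For the INZPFM option, a sketch under the same hypothetical names;
initializing the member with deleted records, combined with
REUSEDLT(*YES), lets concurrent inserts land in the deleted-record
slots:

  CHGPF FILE(MYLIB/MYFILE) SIZE(500000 10000 100) REUSEDLT(*YES)
  INZPFM FILE(MYLIB/MYFILE) MBR(MYFILE) RECORDS(*DLT) TOTRCDS(*NXTINCR)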

Regards, Chuck

On 15-Feb-2012 19:38, sjl wrote:

Several of them will end up having millions of records.

I have captured the current file size information from the last run,
so I was planning to use CHGPF to change the size on each file to
the current file's record count + 10%, along with ALLOCATE(*YES).

From what you have indicated, it appears that this will speed things
up a bit - and all files get cleared before the job starts, which
will activate the file size allocation.
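
For the ADDRESSES example quoted below, that plan might look like the
following; MYLIB is a hypothetical library name, and 550000 is the
prior count of 500,000 plus 10%:

  CHGPF FILE(MYLIB/ADDRESSES) SIZE(550000 10000 100) ALLOCATE(*YES)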

"CRPence" wrote:

On 15-Feb-2012 15:29, sjl wrote:
I have a very long-running conversion job stream which we are
about to kick off, and I'm wondering if it will save time if I
change the PFs to have an initial size that is at least as large
as the number of records that I know will be written to each
file.

For example, I have an address master file named ADDRESSES which
ended up with almost 500,000 records during the last conversion.

Will it save much time if I do CHGPF FILE(ADDRESSES) SIZE(500000
10000 100) ALLOCATE(*YES) so that it will have an initial
allocation of 500,000 records once it is cleared, instead of
letting it default to SIZE(100000 10000 100) and the system
having to do extents on it once it hits the initial size of
100,000 records?


<<SNIP>>

The best bet is [as asked] to avoid both types of extents by
choosing a near or known final size with ALLOCATE(*YES); i.e. any
"wasted" storage for rows is then by your own choice rather than by
an algorithm.

