Chuck -

Several of them will end up having millions of records.

I have captured the current file size information from the last run, so I was planning to use CHGPF to change the size on each file to the number of records in the current file plus 10%, along with ALLOCATE(*YES).

From what you have indicated, it appears that this will speed things up a
bit - and all files get cleared before the job starts, which will activate the file size allocation.
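As a concrete sketch of that plan in CL (MYLIB is an illustrative library name; ADDRESSES and its roughly 500,000-record count come from the example later in the thread, so 550,000 is that count plus 10%):

```
/* Resize to last run's record count plus 10% and pre-allocate;  */
/* clearing the member then activates the allocation.            */
CHGPF  FILE(MYLIB/ADDRESSES) SIZE(550000 10000 100) ALLOCATE(*YES)
CLRPFM FILE(MYLIB/ADDRESSES)
```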

Regards,
Steve




"CRPence" wrote in message news:mailman.2261.1329356988.29960.midrange-l@xxxxxxxxxxxx...

On 15-Feb-2012 15:29, sjl wrote:
I have a very long-running conversion job stream which we are about
to kick off, and I'm wondering if it will save time if I change the
PFs to have an initial size that is at least as large as the number
of records that I know will be written to each file.

For example, I have an address master file named ADDRESSES which
ended up with almost 500,000 records during the last conversion.

Will it save much time if I do CHGPF(ADDRESSES) SIZE(500000 10000
100) ALLOCATE(*YES) so that it will have an initial allocation of
500,000 records once it is cleared, instead of letting it default to
SIZE(100000 10000 100) and the system having to do extents on it once
it hits the maximum size of 100,000 records?
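For reference, that command written out in full CL syntax (the FILE parameter is required; MYLIB is an illustrative library name):

```
CHGPF FILE(MYLIB/ADDRESSES) SIZE(500000 10000 100) ALLOCATE(*YES)
```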


Maybe. There are two types of extents to be concerned about. Those
extents which are LIC-only are probably not too big a deal; the
algorithm is fairly good, and the impact falls mostly on other
concurrent activity. However, with continuously large insert activity
[e.g. copy activity] the effect may be perceived-as-"wasted" storage for
rows that may never exist, while for continuous single-row inserts the
algorithm may be insufficiently aggressive, such that many small extents
become an unnecessary slowdown for the insert activity. The other type
of extents are those enforced by the SIZE() increments; these are still
performed at the LIC-DB level, but are notified to the non-LIC database.
Avoid those especially for large copy requests; the events and messaging
that inform of very inefficient/small increment sizes are massive
overhead. Those notified extents can be avoided either by using
SIZE(*NOMAX) or by choosing an initial size that will hold all of the
rows; or at least their number can be kept to a practical minimum by
choosing both a large increment size and an initial size that is nearly
large enough to hold all of the rows.
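A minimal sketch of those two alternatives (MYLIB and the SIZE values are illustrative; ADDRESSES is the file from the question above):

```
/* Alternative 1: no maximum, so no SIZE()-increment notifications */
CHGPF FILE(MYLIB/ADDRESSES) SIZE(*NOMAX)

/* Alternative 2: a near-large-enough initial size plus a large    */
/* increment, keeping notified increments to a practical minimum   */
CHGPF FILE(MYLIB/ADDRESSES) SIZE(450000 50000 10)
```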

The best bet is [as asked] to avoid both types of extents by choosing
a near- or known-final size with ALLOCATE(*YES); i.e. the "wasted"
storage for rows is then your own choice rather than an algorithm's.

Regards, Chuck
