


You lost me at your first statement.

I am thinking that you HAVE to rely on the RRN when processing the journals. How else can you compare the BEFORE and AFTER images and be sure that you are comparing the same record?

When I say "farm the journals" I am referring to the same thing you are doing; i.e., "read the journals".
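
As an aside, the RRN dependency is visible if you dump the journal to an outfile with DSPJRN: the record-level entries carry a count/relative record number field, which is what lets you line up before and after images. A rough sketch (the journal, file, and library names here are made up, and the outfile field names should be verified against the model outfile shipped with your release):

```cl
/* Dump record-level entries for one file to an outfile.           */
/* In the *TYPEn outfile formats, JOCTRR holds the count or        */
/* relative record number for record-level operations.             */
DSPJRN JRN(MYLIB/MYJRN) FILE((MYLIB/FILEA)) ENTTYP(*RCD) +
       OUTPUT(*OUTFILE) OUTFILFMT(*TYPE5) OUTFILE(MYLIB/JRNOUT)
```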



-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of rob@xxxxxxxxx
Sent: Tuesday, November 19, 2013 8:36 AM
To: Midrange Systems Technical Discussion
Subject: Re: estimating RGZPFM runtime

1 - We journal like crazy and use it to audit transactions. We do not
rely upon RRN. Then again, we read the journals - not some journal farm
created by processing the journals.
2 - Reorganizing while active, or while not active, will have the same
effect: RRNs are going to change. You may want to consider the one
poster's idea about disabling journaling during the reorg. Modify his CL
by adding your journal farm request right before the reorg, if you still
want to stick with your journal farm.
3 - If you can adapt your train of thought to accept the RRN change
associated with reorganization, you might want to consider reuse deleted
records. The only difference is that you can snapshot your journal farm
around the reorgs, not around the reuse.
4 - You may want to consider the KEYFILE parameter of RGZPFM. This can
help your performance on reading. Use the DB tools to see the most
popular index or suggested index.
5 - If you do start using reuse deleted records, and the number of deleted
records doesn't get terribly skewed, then you should never have to do a
dedicated RGZPFM. For example, if you're going to have roughly the same
number of deleted records every night due to some process, you may as well
not try reorging to get the disk space back, since the file will need it
again the next night. However, if you recently discovered that you're
dropping from retaining 15 years of history to just 2, then by all means,
reorg to get the space back. Or, if one oopsie runaway job really stuffed
the file full of garbage that you have since cleaned out, then reorg to
get that space back.
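
For what it's worth, the commands behind points 2 through 4 could be sketched roughly like this (the library, file, and logical file names are hypothetical, and the exact parameters should be checked against the CL reference for your release):

```cl
/* Hypothetical file MYLIB/FILEA journaled to MYLIB/MYJRN          */

/* 2 - End journaling before the reorg, reorganize, then resume    */
ENDJRNPF  FILE(MYLIB/FILEA) JRN(MYLIB/MYJRN)
RGZPFM    FILE(MYLIB/FILEA)
STRJRNPF  FILE(MYLIB/FILEA) JRN(MYLIB/MYJRN) IMAGES(*BOTH)

/* 3 - Reuse deleted records so dedicated reorgs become rare       */
CHGPF     FILE(MYLIB/FILEA) REUSEDLT(*YES)

/* 4 - Reorganize in the order of a heavily used access path;      */
/*     FILEAL1 stands in for whatever index the DB tools suggest   */
RGZPFM    FILE(MYLIB/FILEA) KEYFILE(MYLIB/FILEAL1 FILEAL1)
```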

Trying to duplicate the reorg on a different LPAR, test library, etc., can
be a terrible waste of time unless you're very certain all things are
equal: memory, disk, controllers, current load on the system, etc.
Normally your second setup is less powerful than your primary, so it may
show you the 'worst case'. Keep in mind that some indexes may be
configured for delayed rebuild and be doing that in the background, so
you'll have to check some system tasks to see when it's 'really' done.
And if you do the duplication by some process that inherently does a
reorg, like certain options of CPYF or CRTDUPOBJ, then your results will
again be quite skewed.


Rob Berendt


This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].
