I have a job that runs every night that effectively replicates a library
from our production box (an 810 with a P10 processor and 8 GB of RAM) to our
test box (a 270 with a P10 processor and 4 GB of RAM). Both machines are
running V5R4 and are current on PTFs.

The job works like this:

10:30 PM on production: six files are copied into a library named
BKPyymmdd. When this finishes, the BKPyymmdd library is sent to the test
system using SAVRSTLIB. No problems here. When this finishes, an identical
BKPyymmdd library exists on both systems.
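For anyone not familiar with ObjectConnect, that send step boils down to a
single command; the library and remote location names below are placeholders,
not our actual values:

    /* Save BKPyymmdd on production and restore it on the test   */
    /* system in one step, no save file or tape in between.      */
    SAVRSTLIB  LIB(BKP070615) RMTLOCNAME(TESTSYS)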

2:00 AM (after backups of both systems run at midnight): an identical job
starts on both systems that populates the 'audit' library on each system
using the six data files from the BKPyymmdd library. This process first
does a CLRPFM on the six data files in the 'audit' library, and then does
a CHGLF xxfile MAINT(*REBLD) for all of the logical files/SQL indexes
that exist over these six physical files (there are 23
logicals/indexes). Then a CPYF is done (starting at RRN 1) to repopulate the
files, and when all of the CPYFs finish, a CHGLF xxfile MAINT(*IMMED) is
done to rebuild the indexes. This is the fastest method I've developed for
completing the task overnight. The files are rather large; three of them
have over 25 million records (and there are no deleted records).
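In CL terms, one file's worth of that 2:00 AM step looks roughly like this
(the file, library, and logical file names are placeholders; the real job
repeats this for all six files and all 23 logicals/indexes):

    /* Empty the target physical file in the audit library         */
    CLRPFM     FILE(AUDIT/BIGFILE1)
    /* Suspend index maintenance on each logical/index over it     */
    CHGLF      FILE(AUDIT/BIGFILE1L1) MAINT(*REBLD)
    /* Reload from the restored backup copy, starting at RRN 1     */
    CPYF       FROMFILE(BKPYYMMDD/BIGFILE1) TOFILE(AUDIT/BIGFILE1) +
                 MBROPT(*REPLACE) FROMRCD(1)
    /* Switch back to immediate maintenance, which kicks off the   */
    /* access path rebuild                                         */
    CHGLF      FILE(AUDIT/BIGFILE1L1) MAINT(*IMMED)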

The question is this: while this job runs on both systems simultaneously,
when I monitor the access path rebuilds via EDTRBDAP, there is a column
that shows the estimated time to rebuild each access path. The odd thing
is that on our production system the estimated time is 30 to 50 percent
greater than the estimated time for the same indexes on the test system.

Our production system has double the RAM and more CPW (810 vs. 270). So why
would the rebuild of these access paths finish faster on a smaller box with
less horsepower? Granted, there are a few other jobs on the production
system that run overnight, but typically these are queued, and last night
(at 4:30 this morning when I checked) there were only two other batch jobs
running. The test system is also our Domino e-mail server, so it's not
always idle overnight either. The production system also has more
available/free disk space.

I expect the usual answer ("It depends"), but I'm looking for other clues
as to what could be causing this behavior, which results in the longer
access path rebuild times on production.

Anyone with any thoughts?

Regards, Jerry

Gerald Kern - MIS Project Leader
Lotus Notes/Domino Administrator
IBM Certified RPG IV Developer
The Toledo Clinic, Inc.
4235 Secor Road
Toledo, OH 43623-4299
Phone 419-479-5535
gkern@xxxxxxxxxxxxxxxx


