


Hi Michael

Hopefully someone will come up with a better answer for you, but when I
was faced with a similar dilemma - a layout change for a file with 140
million records and thirty logical files - I came up with the following
approach:

1. Save the file (Of course! :) )
2. Drop all the logical files
3. Rename the old file
4. Create the new file.
5. Write the data from the old file to the new file. I did this
programmatically, using record number ranges so that I could read the file
sequentially (as Booth was alluding to) and also override the file to use
blocking to optimise my disk I/O (that was my theory, anyway). When setting
the limits for the file reads (i.e. calculating your record ranges), be
aware that deleted records need to be included in the range calculation
(CL sketches of steps 2 through 6 follow this list).
6. I also did the writes in parallel, in chunks of 10 million records. On a
2-way 820 I found four jobs in parallel was about right, but I did not have
much time to tune or tweak this. YMMV.
7. After the new file was created I added the logicals back. There is a
significant advantage in ordering the logical creation so that later
logicals can take advantage of keys that already exist where possible (see
"implicitly shared access paths" in the manual for the details).
Basically, I created the longest keys first and then tried to arrange the
later logicals so that their keys used all or part of keys that had already
been built wherever possible. This may take some experimentation :) but
should save both time and disk space.
8. If at all possible, put the business-critical access path first. There
may also be an opportunity to reconsider the access path maintenance of some
logicals and make it delayed - for instance, logicals that only get used once
a month for reporting (steps 7 and 8 are also sketched after this list).
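
To make this a little more concrete, here is a rough CL sketch of steps 2
to 4. All the library, file and member names (MYLIB, BIGFILE, BIGFILEL1,
BIGFILEOLD, QDDSSRC) are made up for illustration, and you would repeat
the DLTF for each dependent logical:

   /* Step 2: drop the dependent logical files (one DLTF per logical) */
   DLTF       FILE(MYLIB/BIGFILEL1)

   /* Step 3: rename the old physical out of the way                  */
   RNMOBJ     OBJ(MYLIB/BIGFILE) OBJTYPE(*FILE) NEWOBJ(BIGFILEOLD)

   /* Step 4: create the new-layout physical from its DDS source      */
   CRTPF      FILE(MYLIB/BIGFILE) SRCFILE(MYLIB/QDDSSRC) +
                SRCMBR(BIGFILE) SIZE(*NOMAX)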
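
For steps 5 and 6 I actually drove the copy from a program running under
OVRDBF ... SEQONLY(*YES nnnn) overrides to get the blocking, but the
chunking itself can be sketched with plain CPYF and FROMRCD/TORCD, which is
what Michael is already doing. The ranges and job names below are
illustrative only:

   /* Steps 5 and 6: submit each relative-record-number range as its  */
   /* own job.  Deleted records still occupy RRNs, so they count      */
   /* toward each range.  Four parallel jobs at a time worked for me. */
   SBMJOB     CMD(CPYF FROMFILE(MYLIB/BIGFILEOLD) +
                       TOFILE(MYLIB/BIGFILE) MBROPT(*ADD) +
                       FMTOPT(*MAP *DROP) FROMRCD(1) +
                       TORCD(10000000)) JOB(CPYCHUNK1)
   SBMJOB     CMD(CPYF FROMFILE(MYLIB/BIGFILEOLD) +
                       TOFILE(MYLIB/BIGFILE) MBROPT(*ADD) +
                       FMTOPT(*MAP *DROP) FROMRCD(10000001) +
                       TORCD(20000000)) JOB(CPYCHUNK2)
   /* ...and so on for the remaining ranges                           */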
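
Steps 7 and 8 are then just a CRTLF per logical, issued in the order
described above, with the maintenance change applied to the month-end
reporting logicals afterwards (or specified on the CRTLF itself). Again,
the names are illustrative:

   /* Step 7: recreate the logicals, longest / most critical keys     */
   /* first so later ones can implicitly share the access paths       */
   CRTLF      FILE(MYLIB/BIGFILEL1) SRCFILE(MYLIB/QDDSSRC) SRCMBR(BIGFILEL1)
   CRTLF      FILE(MYLIB/BIGFILEL2) SRCFILE(MYLIB/QDDSSRC) SRCMBR(BIGFILEL2)

   /* Step 8: relax access path maintenance on rarely used logicals   */
   /* (delayed maintenance is only allowed for non-unique keys)       */
   CHGLF      FILE(MYLIB/BIGFILEL9) MAINT(*DLY)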

Dropping the logicals certainly helped, as it prevented storage getting
wildly out of control and allowed me to write all the data across
sequentially rather than by key (which is much slower). You will need to
make sure that all your unique keys will remain unique. If you are adding
new logicals it might be better to create them over the original file to
validate them before doing the copy (that is how I would probably do it -
see the sketch below), but in my experience this has never been a problem:
if the data is already OK, duplicates have either not been an issue or have
already been dealt with before the actual database conversion.
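
If you do want that pre-check, it is just a matter of building the proposed
unique logical over the current data before the conversion; something along
these lines (hypothetical names again):

   /* Build the proposed new unique logical over the existing data.   */
   /* If duplicate key values exist the create fails now, before the  */
   /* conversion window; drop it again once the check has served its  */
   /* purpose.                                                        */
   CRTLF      FILE(MYLIB/NEWUNIQL1) SRCFILE(MYLIB/QDDSSRC) SRCMBR(NEWUNIQL1)
   DLTF       FILE(MYLIB/NEWUNIQL1)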

Hope this helps - I will be interested to see what other suggestions you get.

Regards
Evan Harris

--
Hello folks,

Our product has a main file (with a unique key on the PF, plus a LOT of
logical files) that can grow to a pretty large record volume (25M to 200M
records).  We need to change the way we handle a new release that includes a
database change to that file.  The question I have is this - what is the best
way to update the database file definition and translate the data to the new
definition for a large record count?

CHGPF changes the file "serially", which is terribly slow.  I have written a
process that submits a given number of "parallel" CPYF commands, but it
appears that the constraint is building the UNIQUE access path of the PF.
Once I get to 5 or 6 jobs for a 3.5M record test data set on our
two-processor development box, I cease to get runtime improvement.  It takes
1:20, plus or minus 3 minutes (an hour and 20 minutes).  I have removed all
logical files so that the only index being built is the UNIQUE one on the
physical file.

I hope the question is clear.  What do you folks think?

Thanks in advance,
Michael Polutta
Atlanta, GA

