I've got a file with 40 million records, 10 million of which are deleted. The file is over 14 GB, and there are 8 access paths built on it. We would like to attempt a reorg of this file, but are concerned about the downtime we'll experience, as this file is used throughout our operations. Is there any reliable way to estimate the time required to reorg?

FWIW, this is a model 510 running V3R7 with 94 GB of DASD (76% currently in use) and 392 MB of memory.

If I did a RGZPFM, I'd remove the members from the file's logicals before the reorg, and then add them back afterward; I think I've heard it is more efficient to do so. Should I also consider duplicating the physical file in a separate library and using CPYF to copy the 30 million active records into it, then duplicating the logicals over, deleting the original library's files, and moving the newly copied data over?

Your "been there, done thats" would be greatly appreciated.

Dan Bale
IT - AS/400
Handleman Company
248-362-4400 Ext. 4952

+---
| This is the Midrange System Mailing List!
| To submit a new message, send your mail to MIDRANGE-L@midrange.com.
| To subscribe to this list send email to MIDRANGE-L-SUB@midrange.com.
| To unsubscribe from this list send email to MIDRANGE-L-UNSUB@midrange.com.
| Questions should be directed to the list owner/operator: david@midrange.com
+---
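For what it's worth, the two approaches described above can be sketched in CL roughly as follows. All object names here (APPLIB, WORKLIB, CUSTPF, CUSTLF1, etc.) are hypothetical stand-ins for your own, only one of the eight logicals is shown, and the exact DTAMBRS syntax on ADDLFM should be verified against the command prompter before running anything like this in production:

```
/* --- Approach 1: drop logical members, RGZPFM, re-add ---          */
/* Get an exclusive lock so nothing touches the file mid-reorg.      */
ALCOBJ     OBJ((APPLIB/CUSTPF *FILE *EXCL)) WAIT(60)

/* Remove the member from each logical (repeat for all 8 logicals). */
RMVM       FILE(APPLIB/CUSTLF1) MBR(CUSTPF)

/* Reorganize the physical; deleted-record space is reclaimed.       */
RGZPFM     FILE(APPLIB/CUSTPF)

/* Re-add each logical member; the access path rebuilds here.        */
ADDLFM     FILE(APPLIB/CUSTLF1) MBR(CUSTPF) +
             DTAMBRS((APPLIB/CUSTPF (CUSTPF)))

DLCOBJ     OBJ((APPLIB/CUSTPF *FILE *EXCL))

/* --- Approach 2: copy active records to a work library ---         */
/* Duplicate the physical (object only, no data), then copy the      */
/* ~30 million undeleted records into it.                            */
CRTDUPOBJ  OBJ(CUSTPF) FROMLIB(APPLIB) OBJTYPE(*FILE) +
             TOLIB(WORKLIB) DATA(*NO)
CPYF       FROMFILE(APPLIB/CUSTPF) TOFILE(WORKLIB/CUSTPF) +
             MBROPT(*REPLACE)
/* Then duplicate the logicals into WORKLIB, delete the originals,   */
/* and MOVOBJ the new objects back into APPLIB.                      */
```

A rough timing estimate for either approach comes from two parts: the data copy/compress pass (reading and rewriting ~14 GB sequentially) and the access-path rebuilds for the 8 logicals, which on a memory-constrained box like a 392 MB model 510 often dominate. Timing the same sequence against a CRTDUPOBJ'd copy of the file in a test library is the most reliable estimator.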