DANGER DANGER WILL ROBINSON!!

That's RMVM, not DLTPFM, please!

And ADDLFM on the logicals, not ADDPFM.

That's all, folks!

Wait, not all - I suggest submitting the ADDLFMs to batch; otherwise your interactive job will be tied up for each one. Even better, have a CL program that runs them in the order that facilitates sharing the access paths (use ANZDBF and ANZDBFKEY to figure out which logicals could share - look at the second spooled file from ANZDBFKEY).
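
Roughly like this - every name below is made up, so substitute your own
libraries, files, and members, and take the ordering from the ANZDBFKEY
output:

    PGM  /* Rebuild the logical members in access-path-sharing order */
    /* Placeholder names; one ADDLFM per logical over the physical.  */
    ADDLFM FILE(PRODLIB/CUSTL01) MBR(CUSTMAS)
    ADDLFM FILE(PRODLIB/CUSTL02) MBR(CUSTMAS)
    /* ... and so on for the remaining logicals ...                  */
    ENDPGM

Compile that with CRTCLPGM and submit it once instead of running each
ADDLFM interactively:

    SBMJOB CMD(CALL PGM(PRODLIB/PURGELF)) JOB(PURGELF)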

Dan Kimmel wrote:
DLTPFM on all logicals.
Save a copy of the file. Do not save access path.
CLRPFM
Restore the copy to another name.
Copy the records you want to keep back to the original.
ADDPFM on all logicals.
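
In command form the save/clear/restore part might look like this
(library, file, and save file names are invented, and restoring into a
different library stands in for "another name"):

    CRTSAVF FILE(QGPL/PURGE)
    SAVOBJ OBJ(BIGFILE) LIB(PRODLIB) DEV(*SAVF) SAVF(QGPL/PURGE) ACCPTH(*NO)
    CLRPFM FILE(PRODLIB/BIGFILE)
    RSTOBJ OBJ(BIGFILE) SAVLIB(PRODLIB) DEV(*SAVF) SAVF(QGPL/PURGE) RSTLIB(WORKLIB)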

A couple of tricks in this: dropping the logical members (DO NOT delete
the logical files, just the members) removes the system's need to
maintain the logicals on every change to the physical. You won't need
the access paths on the save/restore, which saves some time and space.
CLRPFM is very fast (almost instantaneous, no matter how many records)
and removes all deleted records, eliminating the need for RGZPFM. The
restore will go fast as it is just the data. Use CPYF or SQL to copy the
records you want back into the original (example below). Each ADDPFM
will rebuild the access path in the logical over all the records in the
physical, which on large files seems to be much faster than maintaining
the logical access path as each record is added.
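
For the copy-back, something along these lines - the key field
(TRNDATE) and the cutoff value are invented for illustration:

    CPYF FROMFILE(WORKLIB/BIGFILE) TOFILE(PRODLIB/BIGFILE) +
         MBROPT(*ADD) INCREL((*IF TRNDATE *GE 20080101))

or the SQL equivalent, an INSERT ... SELECT with the matching WHERE
clause.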

We do this regularly on files of 10 million records or more and get it
done in well under an hour on really slow machines. I should think even
your 2-billion-record file would work in less than 10 hours, provided
you have a fairly quick machine. (And you must have a fairly quick
machine to get 2 billion records into the table in the first place.)

-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx
[mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Jonathan Mason
Sent: Monday, December 08, 2008 4:19 AM
To: midrange-l@xxxxxxxxxxxx
Subject: Most Efficient Way of Purging File

We have a file that contains an enormous number of records (just over
1.9 billion) and we want to clear it down a bit to reclaim some
much-needed DASD. The trouble is that, because of the file's size, our
standard purge routine takes longer than the 17-hour window we have
available for running housekeeping tasks.

At the moment we think we have two choices available to us:

1. Use SQL DELETE to remove all the records we want to purge and then
run RGZPFM over the file (rough sketch below).

2. Restore a copy of the file from tape to a second server, use CPYF to
create a new version of the physical containing only those records we
want to retain, and then restore that over the original. We think this
will automatically perform the RGZPFM as part of the restore/copy.
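
For option 1 we are picturing something like this - the date field and
cutoff are invented for illustration, and our real selection is more
involved (the DELETE run from STRSQL or embedded SQL, the RGZPFM from a
command line):

    DELETE FROM PRODLIB/BIGFILE WHERE TRNDATE < 20080101
    RGZPFM FILE(PRODLIB/BIGFILE)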

Does anybody have any experience of, or advice on, performing this sort
of "purge", or ideas on how to improve performance?

We're running V5R4 and there are eight logicals over the file.

Thanks

Jonathan