I guess the problem I see with the copy approach is that the file will be
out of date when you restore it. You don't really say how you use the file,
so this option is a bit confusing. Is the large file only used for reading?
Also, the best approach will depend a bit on how many records you want to
purge and whether the window is insufficient for the deleting or for the
re-organizing; if it's the latter, then Pete's suggestion is your best bet.
Another approach worth investigating is re-creating the file with the
required data rather than deleting records and running a RGZPFM (see the
CL sketch after this list). The rough approach would be to:
- delete the logicals
- rename the existing file
- CRTDUPOBJ to the old file name with DATA(*NO)
- Write the required data back into the file programmatically or via CPYF
(use FROMRCD(1)!)
- recreate the logicals
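Something like this in CL, assuming made-up names (MYLIB, BIGFILE,
BIGFILEO, a TRANDATE field) and a date-based purge rule - adjust the
selection to your own criteria:

  RNMOBJ    OBJ(MYLIB/BIGFILE) OBJTYPE(*FILE) NEWOBJ(BIGFILEO)
  CRTDUPOBJ OBJ(BIGFILEO) FROMLIB(MYLIB) OBJTYPE(*FILE) +
              NEWOBJ(BIGFILE) DATA(*NO)
  /* Copy back only the records to keep; FROMRCD(1) forces  */
  /* arrival sequence processing rather than a keyed read   */
  CPYF      FROMFILE(MYLIB/BIGFILEO) TOFILE(MYLIB/BIGFILE) +
              MBROPT(*ADD) FROMRCD(1) +
              INCREL((*IF TRANDATE *GE 20050101))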
Writing the data back using a program allows you to run a number of jobs
reading different sections of the source file. Just divide the file up by
RRN and then you can read and write concurrently (how many jobs you can run
effectively depends on your hardware). You should also be able to use
blocking on the file to achieve better I/O.
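As a sketch, two submitted jobs could each copy half of the 1.9 billion
records by RRN (the split point, names and the QSYSNOMAX job queue are
just assumptions, and the purge selection is omitted for brevity):

  SBMJOB CMD(CPYF FROMFILE(MYLIB/BIGFILEO) TOFILE(MYLIB/BIGFILE) +
           MBROPT(*ADD) FROMRCD(1) TORCD(950000000)) +
           JOB(PURGE1) JOBQ(QSYS/QSYSNOMAX)
  SBMJOB CMD(CPYF FROMFILE(MYLIB/BIGFILEO) TOFILE(MYLIB/BIGFILE) +
           MBROPT(*ADD) FROMRCD(950000001) TORCD(*END)) +
           JOB(PURGE2) JOBQ(QSYS/QSYSNOMAX)

If you write the data back with your own program instead, an OVRDBF with
SEQONLY(*YES 1000) before the file is opened is one way to get blocked I/O.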
When rebuilding the logicals, examine the key structures; rebuild the most
complex ones first, as the later keys will be quicker to build - they may
even end up being implicitly shared, taking no space and no time to
recreate (this can be a not-so-good thing in some application designs). If
you rebuild the logicals manually you also have the opportunity to run more
than one job stream to create them. It's probably best to analyze the key
structure(s) to decide what sequence to run them in and how many streams
you can run.
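For example, one job stream per logical (the DDS source file, member and
job queue names are assumptions; submit the most complex key first so the
simpler ones can implicitly share it):

  SBMJOB CMD(CRTLF FILE(MYLIB/BIGLF1) SRCFILE(MYLIB/QDDSSRC) +
           SRCMBR(BIGLF1)) JOB(BLDLF1) JOBQ(QSYS/QSYSNOMAX)
  SBMJOB CMD(CRTLF FILE(MYLIB/BIGLF2) SRCFILE(MYLIB/QDDSSRC) +
           SRCMBR(BIGLF2)) JOB(BLDLF2) JOBQ(QSYS/QSYSNOMAX)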
Hope this provides some food for thought anyway.
I'd love to have a crack at this :)
Regards
Evan Harris
-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx
[mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Jonathan Mason
Sent: Monday, 8 December 2008 11:19 p.m.
To: midrange-l@xxxxxxxxxxxx
Subject: Most Efficient Way of Purging File
We have a file that has an enormous number of records in it (just over 1.9
billion) and want to clear it down a bit to reclaim some much-needed
DASD. The trouble is that because of the file size our standard purge
routine takes longer than the available 17-hour window that we have for
running housekeeping tasks.
At the moment we think we have two choices available to us:
1. Use SQL DELETE to remove all the records we want to purge and then
run a RGZPFM over the file
2. Restore a copy of the file from tape to a second server, use CPYF
to create a new version of the physical with only those records we want
to retain, and then restore that over the original. We think this will
automatically perform the RGZPFM as part of the restore/copy.
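For option 1 we'd be looking at something along these lines (library,
file and field names and the date condition are only placeholders):

  DELETE FROM MYLIB/BIGFILE WHERE TRANDATE < 20050101
  RGZPFM FILE(MYLIB/BIGFILE) KEYFILE(*NONE)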
Does anybody have any experience or advice on performing this sort of
"purge", ideas on how to improve performance, etc.?
We're running V5R4 and there are eight logicals over the file.
Thanks
Jonathan