You must be careful when changing a PF to REUSEDLT(*YES).
We have some 3rd party apps that require REUSEDLT(*NO) for various reasons.
Also, if you can afford several hours of extra downtime during IPL, then Chris's idea works.
We cannot be down that extra time, so we use RGZPFM while active.
For this to work, the PF must be journaled, so the automated process starts journaling before the reorganize and ends it when complete.
We run this process monthly over any file with deleted records; it currently runs a half hour to an hour.
Years back, this process took 10 to 20 hours, and weeks on the R&D partition.
There have been significant improvements to RGZPFM in V6R1 and V7R1; not sure about V5R4.
/* Journal the file so RGZPFM can run while the member stays active */
STRJRNPF FILE(&MBLIB/&MBFILE) JRN(QGPL/RGZPFM)
/* Reorganize in place; LOCK(*SHRUPD) lets jobs keep the member open */
RGZPFM FILE(&MBLIB/&MBFILE) MBR(&MBNAME) +
  RBDACCPTH(*NO) ALWCANCEL(*YES) +
  LOCK(*SHRUPD) /* rgzpfm while active */
/* End journaling once the reorganize completes */
ENDJRNPF FILE(&MBLIB/&MBFILE) JRN(QGPL/RGZPFM)
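For what it's worth, the monthly list of candidate files could be built with a DSPFD outfile. This is only a sketch of that idea, not the poster's actual code; the library MYLIB and the outfile QTEMP/MBRLIST are placeholders, and MBNDTR is the deleted-record-count field in the *MBR outfile format:

/* Sketch only: list members and their deleted-record counts */
DSPFD FILE(MYLIB/*ALL) TYPE(*MBR) FILEATR(*PF) +
  OUTPUT(*OUTFILE) OUTFILE(QTEMP/MBRLIST)
/* Read QTEMP/MBRLIST and run the journal/reorganize/end-journal */
/* sequence above for each member where MBNDTR is greater than 0 */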
From: MIDRANGE-L [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Chris Bipes
Sent: Monday, June 30, 2014 11:08 AM
To: 'Midrange Systems Technical Discussion'
Subject: RE: impact of deleted records
Data management can handle files with many more deleted records than live records, but it is not best practice.
For random keyed access there is not much of a performance hit, but any sequential processing will be affected: all of those deleted records are skipped over. Your application does not see them, but the DB manager still reads through the dead space.
The first thing to do is change the file to REUSEDLT(*YES). This will stop the file from growing, and it will start filling in the deleted space.
I like to have a program that runs at startup of the system and performs maintenance. Make sure the program is owned by QSECOFR and runs with USRPRF(*OWNER). This program can have both the CHGPF FILE(LIB/FILE) REUSEDLT(*YES) and the RGZPFM FILE(LIB/FILE) RBDACCPTH(*YES) commands.
This will purge the deleted records from the file and help with sequential processing performance. It will also release a lot of storage being used to hold all of the deleted records.
With 74 million active records, this will take a while to run and re-index. Depending on your hardware, it can take anywhere from a few minutes to a couple of hours.
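A minimal sketch of such a startup program might look like the following (LIB/FILE are placeholders; compile it owned by QSECOFR with USRPRF(*OWNER) as described above):

PGM
MONMSG MSGID(CPF0000) /* keep startup going if a step fails */
CHGPF FILE(LIB/FILE) REUSEDLT(*YES) /* stop growth, reuse space */
RGZPFM FILE(LIB/FILE) RBDACCPTH(*YES) /* purge deleted records */
ENDPGM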
Director of Information Services
707.665.2100, ext. 1102 - 707.793.5700 FAX chris.bipes@xxxxxxxxxxxxxxx www.cross-check.com
On 2014-06-29, at 5:34 PM, Peter Connell <Peter.Connell@xxxxxxxxxx> wrote:
Huge file of 74 million rcds has 301 million deleted records and
there are regular additions/deletions.
Can data management really handle this with little performance impact?