Hello, Reeve,

Here is a scenario I can recount from an actual customer site.  I was on-site to do some training, early this century, and one day in the middle of the week many of the IT staff were in a panic because one critical file in their application could no longer accept any new records (inserts).

The physical file was originally created before the arrival of the then relatively new REUSEDLT(*YES) feature.  I looked at the file details with DSPFD, and we discovered that this file, which had reached the maximum member size of roughly 4 billion records (a 4-byte, 32-bit unsigned binary integer tracks the number of records in each member), actually consisted of 90% deleted records.  :-o
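For anyone who wants to check their own files the same way, the counts are in the member description (MYLIB/MYFILE below is just a placeholder):

    DSPFD FILE(MYLIB/MYFILE) TYPE(*MBR)

The display (or spooled output) shows "Current number of records" and "Number of deleted records" for each member.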

Since no one had noticed, the file had grown larger and larger over time, and it was now so large that an RGZPFM would take longer than a three-day weekend.  This was before the arrival of the "reorganize while active" feature that allows you to suspend a reorg and then restart it later.

Fortunately, we were able to change the file with CHGPF to specify the then relatively new REUSEDLT(*YES), and fairly quickly, the applications that were writing to or updating records in that file began working once again.
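The change itself is a one-liner (library and file names here are placeholders):

    CHGPF FILE(MYLIB/MYFILE) REUSEDLT(*YES)

New inserts then begin filling deleted-record slots instead of extending the member.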

However, the file still showed this amazing number of deleted records.  

It turned out that a very bad application design had led to this problem.  The application would select some subset of records from the file, load them into a subfile, and then DELETE those records from the database, to prevent any other users from updating those records while they were "in use" and "under review" in the subfile.  The user could then page up and down and change any desired records in the subfile.  Finally, when the user pressed F3=Exit, the application would write ALL of the subfile records back out to the database.

This application should have been using normal record locking and commitment control.  But, as I said, it was poorly designed, perhaps a carry-over from an old S/36 design.  And, back then, none of their database files were even journaled.  :-/    One can imagine the problems if the system "crashed" (power loss, etc.) while a user had a few thousand records loaded into the subfile, now deleted from the physical file, and had not yet pressed F3=Exit.

Anyway, my point is, your situation could be similar.  That file may have simply accumulated all of those deleted records over many years.  And, without REUSEDLT(*YES), the file will just grow bigger and bigger until it "hits the wall" as described above.  Changing the file to REUSEDLT(*YES) will not magically clean up the existing deleted records; you would need to use the newer reorganize-while-active support in RGZPFM to clean those out.
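For the record, a suspendable reorg looks something like this (names are placeholders; check which options your release supports):

    RGZPFM FILE(MYLIB/MYFILE) ALWCANCEL(*YES) LOCK(*SHRUPD)

ALWCANCEL(*YES) lets the reorganize be canceled and resumed later, and LOCK(*SHRUPD) allows concurrent updates while it runs.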

I hope this sheds some light on what may be happening and why.

All the best,

Mark S. Waterbury




On Thursday, January 29, 2026 at 08:59:55 PM EST, x y <xy6581@xxxxxxxxx> wrote:

My thanks to those who responded.

I understand the "old" reorg requires exclusive access; I use
reorg-while-active for smaller files with a small number of deletes but
these files are usually low-volatility master files where the net deletes
are greater than the net inserts; hence, deleted record count > 0.

This is not an application issue--I know the business and I know the
application.  At 2,000 orders a day and 8,800,000 deleted records, each
order would have to insert and delete 4,400 records...and for that volume,
my journal receivers would be hundreds of gigabytes.  Running a receiver
audit halfway through the workday shows about 1,000 inserts (PX's) so
delete space is being reused.

There is one more possibility: the previous developers purged years of old
data from the files but failed to reorg to remove the dead space.  I'd be
surprised (given the speed at which a large file can reorg and knowing the
previous admins were very careful) if that were the case but it may be the
Occam's Razor answer.  I will send the email.

-rf


On Thu, Jan 29, 2026 at 2:10 AM Birgitta Hauser <Hauser@xxxxxxxxxxxxxxx>
wrote:

The first thing I'd do is to reorganize these files.
BTW it is also possible to interrupt an RGZPFM ... and restart (continue)
the reorganization later.
After the tables have been reorganized, I would watch how the total number
of records progresses in relation to the deleted records.

BTW deleted rows are also bad for unkeyed reading (with native I/O) or a
Table Scan in SQL, because all rows (including the deleted ones) are read.


Mit freundlichen Grüßen / Best regards

Birgitta Hauser
Modernization – Education – Consulting on IBM i
Database and Software Architect
IBM Champion since 2020

"Shoot for the moon, even if you miss, you'll land among the stars." (Les
Brown)
"If you think education is expensive, try ignorance." (Derek Bok)
"What is worse than training your staff and losing them? Not training them
and keeping them!"
"Train people well enough so they can leave, treat them well enough so they
don't want to. " (Richard Branson)
"Learning is experience … everything else is only information!" (Albert
Einstein)


-----Original Message-----
From: MIDRANGE-L <midrange-l-bounces@xxxxxxxxxxxxxxxxxx> On Behalf Of
Reeve
Sent: Thursday, 29 January 2026 08:43
To: midrange-l@xxxxxxxxxxxxxxxxxx
Subject: REUSEDLT not reusing

I'm looking at a dozen reasonably busy files with REUSEDLT(*YES) that have
a large number of deleted records.  One example: ~780,000 records,
~8,100,000 deleted records, record length 110 bytes, no extraordinary data
~8,100,000 deleted records, record length 110 bytes, no extraordinary data
structures or attributes.  The files in question are journaled with "after"
images and all were created in March 2000 (almost 26 years ago--hard to
believe!).

There is no app that deletes that many records in one shot.  I'm ruling out
the possibility the app fired up with 8,800,000 deleted records.  I don't
remember exactly when this feature came out, but I do have a faint
recollection of having to recompile my PF's to make REUSEDLT work properly.
Or is my memory what's not working properly?

My plan: CHGPF to cycle REUSEDLT off/on, RGZPFM, and watch the number of
deletes for the next week.  Next step: CHGPF with the source member.

I'm grateful for any advice.

--rf

--
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list
To post a message email: MIDRANGE-L@xxxxxxxxxxxxxxxxxx To subscribe,
unsubscribe, or change list options,
visit: https://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxxxxxxxx
Before posting, please take a moment to review the archives at
https://archive.midrange.com/midrange-l.

Please contact support@xxxxxxxxxxxxxxxxxxxx for any subscription related
questions.




