Just be careful you don't have any code that relies on relative record
numbers, since a reorganize compresses out the deleted records and renumbers
the ones that remain. One of my clients suffers from this. It's not the
application software that has the problem, but the way their DataMirror
software is set up to extract information to ancillary applications on other
platforms.

Every time the subject of housekeeping is broached, the lady in charge of
DataMirror whines that she would have to reload her databases as a result.
Of course, she can't/won't provide any information regarding reload times.

Paul Nelson
Cell 708-670-6978
Office 409-267-4027
nelsonp@xxxxxxxxxxxxx


-----Original Message-----
From: MIDRANGE-L [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Chris
Bipes
Sent: Monday, June 30, 2014 10:08 AM
To: 'Midrange Systems Technical Discussion'
Subject: RE: impact of deleted records

Data management can handle files with many more deleted records than live
records, but it is not best practice.

For random indexed access there is not much of a performance hit, but any
sequential processing will be affected: all those deleted records must still
be skipped over. Your application does not see them, but the DB manager reads
through the dead space.
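
Before deciding what to do, it helps to see how much dead space the file is
actually carrying; a minimal check, with LIB/FILE as a placeholder:

DSPFD FILE(LIB/FILE) TYPE(*MBR)

The member attributes it displays include the current number of deleted
records.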

The first thing to do is change the file to REUSEDLT(*YES). This will stop
the file from growing, since new records will start filling in the deleted
space.

CHGPF FILE(LIB/FILE) REUSEDLT(*YES)

I like to have a program that runs at system startup and performs this
maintenance. Make sure the program is owned by QSECOFR and runs under the
*OWNER profile. This program can run both the CHGPF FILE(LIB/FILE)
REUSEDLT(*YES) and the RGZPFM FILE(LIB/FILE) RBDACCPTH(*YES) commands, as in
the sketch below.
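
A minimal CL sketch of such a startup program, with LIB/FILE as a
placeholder; compile it with CRTCLPGM specifying USRPRF(*OWNER) and make
QSECOFR the owner so it adopts the needed authority at IPL:

PGM
  /* Stop the file from growing by reusing deleted-record space */
  CHGPF FILE(LIB/FILE) REUSEDLT(*YES)
  MONMSG MSGID(CPF0000) /* Tolerate errors, e.g. attribute already set */
  /* Compress out the existing deleted records, rebuild access paths */
  RGZPFM FILE(LIB/FILE) RBDACCPTH(*YES)
  MONMSG MSGID(CPF0000)
ENDPGM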

The reorganize will purge the deleted records from the file and help
sequential processing performance. It will also release a lot of the storage
being used to hold all of those deleted records.

With 74 million active records, this will take a while to run and re-index.
Depending on your hardware, it can take anywhere from a few minutes to a
couple of hours.
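
If holding an exclusive lock for that long is a problem, later releases let
you reorganize while active; a hedged sketch, assuming your release supports
the ALWCANCEL and LOCK parameters on RGZPFM and that the file is journaled:

RGZPFM FILE(LIB/FILE) ALWCANCEL(*YES) LOCK(*SHRUPD)

Run that way the reorganize can be suspended and resumed later, at the cost
of a slower overall run.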

--
Chris Bipes
Director of Information Services
CrossCheck, Inc.

707.665.2100, ext. 1102 - 707.793.5700 FAX
chris.bipes@xxxxxxxxxxxxxxx
www.cross-check.com



On 2014-06-29, at 5:34 PM, Peter Connell <Peter.Connell@xxxxxxxxxx> wrote:

A huge file of 74 million records has 301 million deleted records and several
logical files.
Can data management really handle this with little performance impact if
there are regular additions/deletions?

