


With all due respect, there are some downsides.

First of all, you will be reading the file keyed, so that's more overhead.

Secondly, when you open the file, the job opening it sets up a "deleted
records map" (my term, not necessarily IBM's), to determine where it can
deposit deleted records, until it has to deposit them at the end of the
file.  Every job opening the file has to do this task, and different jobs
starting at different times will get different maps.

In the early days of REUSEDLT (when CPW was very low) we noticed a
performance hit when a job filled its deleted records map and started to
deposit records at the end of the file.  That hit may have been fixed, or
it may no longer be noticeable on faster systems.

Al

Al Barsa, Jr.
Barsa Consulting Group, LLC

400>390

"i" comes before "p", "x" and "z"
e gads

Our system's had more names than Elizabeth Taylor!

914-251-1234
914-251-9406 fax

http://www.barsaconsulting.com
http://www.taatool.com
http://www.as400connection.com



                                                                           
From: "Don Tully Sr" <donald.tully@att.net>
Sent by: midrange-l-bounces@xxxxxxxxxxxx
To: "Midrange Systems Technical Discussion" <midrange-l@xxxxxxxxxxxx>
Date: 09/26/2006 02:06 AM
Subject: RE: File that has records constantly being added and deleted
Please respond to: Midrange Systems Technical Discussion <midrange-l@midrange.com>




There is absolutely no downside to reuse deleted records.  The performance
impact is very close to zero.  All applications that I have written for the
past 13 years have all files set to reuse deleted records.  It certainly
eliminates the downtime problems associated with file reorgs.
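For anyone wanting to try this, the attribute is set per file with CHGPF (the library and file names below are placeholders):

```
/* Enable reuse of deleted-record space on an existing physical file. */
/* The file cannot be in use when CHGPF runs.                         */
CHGPF FILE(MYLIB/MYFILE) REUSEDLT(*YES)

/* Verify the current setting (look for "Reuse deleted records").     */
DSPFD FILE(MYLIB/MYFILE) TYPE(*ATR)
```

The attribute takes effect for writes after the change; existing deleted-record space is reclaimed gradually as new records reuse it.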

If you must guarantee that records are processed in write sequence, then you
must add a key, perhaps a timestamp.  Obviously the relative record number
will no longer indicate the sequence in which the records were written to
the file.
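Don's timestamp suggestion might be sketched in SQL DDL like this (the schema, table, and column names are invented for illustration):

```
-- Hypothetical work file: arrival order is carried by a timestamp
-- column rather than by relative record number.
CREATE TABLE MYLIB.WORKQ (
  ENTRY_TS TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP,
  PAYLOAD  CHAR(256) NOT NULL
)

CREATE INDEX MYLIB.WORKQ_TS ON MYLIB.WORKQ (ENTRY_TS)

-- The dedicated reader then processes in write sequence:
-- SELECT PAYLOAD FROM MYLIB.WORKQ ORDER BY ENTRY_TS
```

One caveat: if two jobs can write in the same timestamp tick, an additional tiebreaker column would be needed to make the ordering key unique.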

If you want to go to the effort of using a data queue, that certainly could
also be a way to go.  I have also written many data queue routines for high
performance requirements.
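A minimal data queue setup from CL might look like this (object names are placeholders); the producing jobs would call the QSNDDTAQ API and the dedicated reader QRCVDTAQ:

```
/* Create a FIFO data queue for passing work entries between jobs.    */
/* MAXLEN is the longest entry the queue will accept.                 */
CRTDTAQ DTAQ(MYLIB/WORKDTAQ) MAXLEN(256) SEQ(*FIFO)
```

A data queue sidesteps the deleted-record problem entirely, though its contents are not as recoverable as journaled database records, which is a tradeoff to weigh.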

Don Tully
Tully Consulting LLC

-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx
[mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of James H H Lampert
Sent: Monday, September 25, 2006 5:24 PM
To: midrange-l@xxxxxxxxxxxx
Subject: File that has records constantly being added and deleted

Here's the situation:

We have a file. Any arbitrary number of jobs can put
records into the file; a single dedicated job reads the
records, in arrival sequence, processes them, and deletes
them. We thus have a file that rarely has more than a few
active records, but accumulates lots and lots of deleted
ones.

Is there a way to squeeze out deleted records without
having to grab an exclusive lock on the file? Or would it
be more sensible to set it to re-use deleted records, and
modify the processing program to read by key? Or are there
other ideas?
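On the first question: assuming a release with reorganize-while-active support (added around V5R3), RGZPFM can squeeze out deleted records without an exclusive lock (names below are placeholders):

```
/* Reorganize while the file stays available to other jobs.           */
/* ALWCANCEL(*YES) permits a cancellable, share-locked reorganize.    */
RGZPFM FILE(MYLIB/MYFILE) KEYFILE(*NONE) ALWCANCEL(*YES) LOCK(*SHRUPD)
```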

--
JHHL
--
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list
To post a message email: MIDRANGE-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: http://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives
at http://archive.midrange.com/midrange-l.






This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].
