On 29-Jun-2014 16:34 -0500, Peter Connell wrote:
> Huge file of 74 million rcds has 301 million deleted records and
> several logical files.
The Reuse Deleted Records (REUSEDLT) attribute setting plays a role
in such a scenario. Irrespective of that setting, an online reorganize
can help both purge the deleted records and make the data [more]
contiguous. The Reorganize Physical File Member (RGZPFM) command with
the Allow Cancel (ALWCANCEL) option enabled can be used with an
allocation locking level that allows other work to continue [i.e. the
reorganize runs online, vs the requirement to take the file offline
from concurrent work], as specified with the Lock State (LOCK)
parameter.
> Can data management really handle this with little performance impact
> if there are regular additions/deletions?
Random [keyed] I/O access is probably the least impacted re
performance [except for certain multi-format logical file accesses],
and sequential access is probably the most impacted re performance
[due to large gaps in the arrival sequence, for which the number of
active rows per page can be an extremely low number, including zero].
For example, with roughly 74 million active records scattered among
some 375 million record slots, only about one slot in five holds an
active row, so a page that could hold ten rows averages only about
two; a sequential scan then touches roughly five times as many pages
as it would after the deleted records are purged.
FWiW, strictly speaking the performance effects belong to Storage
Management [the LIC SM, as accessed via the LIC DB to perform whatever
access method]; what is known as Data Management [the OS DM] is
effectively unaware of deleted vs active records in its role of
providing a path to the data [records; essentially only the /direct/
access method can access a deleted row].