Thanks to all for your responses. I'll pass them on, and if he doesn't like
them, I'll tell him to ask his former Global Services colleagues for their
input (if they're still in the USA).

I'm feeling evil today........ :-)

Paul Nelson
Cell 708-670-6978
Office 409-267-4027
nelsonp@xxxxxxxxxxxxx

-----Original Message-----
From: MIDRANGE-L [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of
CRPence
Sent: Friday, October 16, 2015 12:48 PM
To: midrange-l@xxxxxxxxxxxx
Subject: Re: RCLSTG formula

On 16-Oct-2015 07:28 -0500, Paul Nelson wrote:
Here's one for the books. At one of my clients, the manager (the
former IBM Global Services guy) wants to know the formula used by
IBM to calculate the amount of time it will take to do a full RCLSTG.

He didn't appreciate that IBM would say, "It depends". :-)

Anybody got a clue? Anybody know which group in Rochester would be
able to answer this?
<<SNIP>>


No idea about /estimating/ beyond what has already been noted by
others, mostly per the data stored in the reclaim-storage DtaAra for
on-system estimates. Outside of that, I am unsure there is any
/formula/; just a document [what are now called TechNote documents]
that once published some numbers for a set of somewhat nebulously
described customer scenarios.

Effectively, of the work the full reclaim performs, the typically
most time-consuming phase is also the most predictable. The feature
basically divides up the /Permanent Storage Directory/ (PSD) and then
examines every segment to inquire its purpose, discarding any that are
thought by the LIC Storage Management (SM) to be in-use but are not
properly tagged as such [ugh! it used to be that the LIC reclaim
instruction would discard a segment only if that segment also had been
identified as established in-use on a *prior* IPL, but that is a topic
of its own nightmare]. So anyhow, that time is quite linear, based
almost entirely on how much permanent storage is reserved [the
temporary storage directory is ignored entirely] and on what percentage
of those segments actually belong to objects; skewed upward or downward
mostly by the speed of the processors, the speed of the disks [where
permanent data resides], and the amount of memory. The reclaim
instruction performs that processing as multiple LIC tasks, so all of
those can matter for that phase of processing.
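That "quite linear" phase could be sketched as a toy model; note this is only an illustration of the shape of the relationship, not any IBM formula, and every coefficient below is hypothetical:

```python
# Toy model of the PSD-scan phase described above: roughly linear in the
# amount of permanent storage, scaled by the fraction of segments that
# belong to objects and by hardware speed. All numbers are hypothetical;
# IBM publishes no such formula.

def psd_scan_hours(permanent_gb, pct_in_objects, disk_factor=1.0, cpu_factor=1.0):
    """Estimate hours for the PSD-scan phase.

    permanent_gb   -- permanent (not temporary) storage in use, in GB
    pct_in_objects -- fraction (0..1) of segments that belong to objects
    disk_factor    -- >1.0 for slower disks, <1.0 for faster ones
    cpu_factor     -- >1.0 for slower processors
    """
    BASE_HOURS_PER_TB = 0.5  # hypothetical coefficient
    tb = permanent_gb / 1024.0
    return tb * BASE_HOURS_PER_TB * (0.5 + pct_in_objects) * disk_factor * cpu_factor

# Example: 4 TB of permanent storage, 80% of segments belong to objects,
# average disks and processors.
print(round(psd_scan_hours(4096, 0.80), 2))
```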

Then comes the complexity, and the reason /it depends/ is the
answer: the /complex objects/, those made up of a composite of other
objects [beyond the /associated space/], rather than being a simple
space object or an object that can have nothing more than an associated
space [or two]. All objects [complex or otherwise] have their pointers
stored in effective buckets. Any object found damaged is, if it is an
internal object of a type deemed inconsequential, deleted; otherwise,
notification of the damage is made, thus the primary function\effect of
the reclaim is "Damage Notification". Those buckets are then passed
from the reclaim feature [as arbiter] to the owning component of each
complex object type, for that component to do its own processing;
Database is the notable example, ensuring that all the composite pieces
of a database *FILE /point to each other/ properly. The database also,
when a *FILE has finished verification, will enqueue a "create database
file" operation on the Database Cross-Reference queue so that the
QDBSRVXR and QDBSRVXR2 jobs will record the details in the catalogs.
Those jobs run concurrent\asynchronous to the reclaim requester, so
that work can become backed up, though usually not as badly as when
RCLSTG SELECT(*DBXREF) is performed separately, wherein a huge list of
files is gathered quickly and add\crt requests are added to the queue
in great numbers in a short time; the database-reclaim function instead
processes serially, enqueuing each request as each object finishes
being reviewed. And there is also a kicker involved [I presume still]:
Referential Integrity (RI) relations must, at least in one direction,
await the clearing of the work on the queue before continuing. So if
the background QDBSRVXR job is ever bogged down while the reclaim
requester is running, that job can incur waits that happen by chance,
according to the order in which the segment addresses were
encountered\retrieved from the PSD and thus the order in which the
object pointers landed within the buckets -- surely a case of effective
randomness qualifying for a legitimate "it depends" :-)
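The backlog difference between serial enqueuing during a full reclaim and the flood of requests from a separate SELECT(*DBXREF) can be illustrated with a toy queue simulation; the rates below are entirely made up, only the shape of the behavior matters:

```python
# Toy illustration of why the cross-reference queue backs up less during a
# full reclaim (which enqueues serially, as each *FILE finishes
# verification) than during a separate RCLSTG SELECT(*DBXREF) (which
# gathers the file list quickly and floods the queue). All rates are
# hypothetical.

def max_backlog(arrivals_per_tick, drain_per_tick, ticks):
    """Simulate a queue; return the worst backlog observed after draining."""
    backlog = worst = 0
    for t in range(ticks):
        backlog += arrivals_per_tick(t)
        backlog = max(0, backlog - drain_per_tick)
        worst = max(worst, backlog)
    return worst

DRAIN = 20  # entries the background job processes per tick (hypothetical)

# Full reclaim: 10 files verified (and enqueued) per tick for 1000 ticks.
serial = max_backlog(lambda t: 10, DRAIN, 1000)

# SELECT(*DBXREF): all 10,000 add/crt requests enqueued in the first few ticks.
flood = max_backlog(lambda t: 2000 if t < 5 else 0, DRAIN, 1000)

print(serial, flood)  # the flood case shows a far larger peak backlog
```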

So anyhow, the more database objects a system has, the longer that
phase of processing; similarly for any such component-specific work:
the more objects of a given type, the longer to verify relationships
and pointers, and to move to QRCL#### whatever was, upon review, deemed
broken [broken being mostly limited to objects not in a context, i.e.
not in a library; moving such objects to the QRCL for the iASP or the
SYSBAS is essentially just making the object visible through the
container for that object type, and I suppose a similar concept applies
for non-library objects].

Similar component-specific work [though probably mostly serial,
unlike the *DBXREF] occurs for other object types; the bigger ones
being DLO, DIR, SPLF, and UDFS, for which the OMIT() and SELECT()
parameters gave options to separate their portion of the overall
reclaim work, which presumably retains some capability to analyze
consistencies despite the PSD not having been processed. Many other
object types with pointers to related objects, like SBSD [I presume],
might have separately callable component-specific processing, or their
processing might be in-lined within the Reclaim (RC) component's
spinning of the object buckets.?.? For example, the "Authority
Recovery" for simple object types might also conditionally handle
certain complex object types, perhaps by code to /run mandatory
pointers/, assigning the ownership to QDFTOWN without having to defer
to the component as object-owner [i.e. the component as owner of the
object-type]. Database (DB) does almost all /phases/ of the work
itself, including that authority recovery, because processing the
Database *FILE (DBF) objects is not as easily in-lined as an object
that has a known, typed, authorized pointer at a specific offset that
can effectively be blindly assigned to the default owner -- the DBF has
objects that are /shared/ and must be owned by QDBSHR or QDBSHRDO.
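That in-line-versus-deferred split during authority recovery could be sketched roughly as a dispatch; the object model, type names in the set, and field layout here are invented purely for illustration:

```python
# Rough sketch of the "Authority Recovery" dispatch described above:
# simple object types, whose owner pointer sits at a known offset, can be
# blindly reassigned to QDFTOWN in-line; complex types (like DBF, whose
# shared internals must be owned by QDBSHR or QDBSHRDO) are deferred to
# their owning component. The data structures here are hypothetical.

COMPLEX_TYPES = {"DBF"}  # types whose owning component does its own recovery

def recover_ownership(obj, deferred):
    """Assign a default owner in-line, or defer to the owning component."""
    if obj["type"] in COMPLEX_TYPES:
        deferred.append(obj)           # e.g. Database handles its own phases
    elif obj["owner"] is None:
        obj["owner"] = "QDFTOWN"       # blind in-line assignment

deferred = []
objects = [
    {"type": "USRSPC", "owner": None},  # simple: recovered in-line
    {"type": "DBF",    "owner": None},  # complex: deferred to Database
]
for o in objects:
    recover_ownership(o, deferred)

print(objects[0]["owner"], len(deferred))  # QDFTOWN 1
```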

Other things like the "amount of damage", often noted as a time
consumer, are generally not much of an issue. What is typically
referred to as "damage" is not /damage to the objects/ [which merely
effects a message being sent; hardly expensive]. Instead, the "amount
of damage" comments refer to the work done to complete pending
operations: interrupted requests, such as a Delete File (DLTF)
operation that was ended by a crash, or by a defect, or by a process
terminate that did not allow the job time to complete [thus why one
should always use only OPTION(*CNTRLD) with a /reasonable/ DELAY()
specification, giving the applications as well as the OS the
opportunity to finish any necessary /cleanup/ work in response to the
End Job (ENDJOB), or to the End Subsystem (ENDSBS) that implicitly
effects an ENDJOB of the jobs the SBS /monitors/]. The time for that
work is generally very short, except following a crash [or long after
other crash(es) after which no reclaim was run]. But the typical
problems effected by a crash are /interrupted processing/, and most are
identified immediately after IPL, because new attempts to continue or
restart the work get a "can not proceed due to prior interrupted
operation"... so such problems are most often resolved by deleting the
problem objects [after possibly extracting all or changed data] rather
than by using Reclaim Storage to effect the recovery.

FWiW, that complete-interrupted-work [cleanup] processing
¿(QRCIMPLN) IIRC? probably can be more easily measured by preceding a
reclaim storage with a reclaim of the objects owned by a user [the
Reclaim Objects by Owner (RCLOBJOWN) command], because that same
processing precedes the much simpler reclaim-by-owner request; a
request that could be run against a user owning very few objects.
Though contemplating that, performing that work might be conditioned on
actually finding an external object owned by the user that is /not in
context/... so perhaps that cleanup feature might not _always_ run with
the simpler request.

