On 15 May 2013 13:45, Raymond B. Dunn wrote:
> It turned out to be a damaged object.
A logically damaged file might be an effect of the DBXREF errors
rather than their cause. Recovery from logical damage is supposed to [or
at least used to] explicitly /overlook/ most inconsistencies with the
data. A physically damaged file, on the other hand, is at least somewhat
likely to be the origin of, i.e. to cause, an inconsistency error with
the DBXREF.
I would not be surprised if, rather than a specific incident of a
damaged object actually being identified as the cause, the problem was
simply /explained away/ as being caused by a "damaged object", and that
explanation was accepted.
> We had to execute a reclaim Database Crossreference.
The only recovery is to correct the *DBXREF data, but whether that
can be done by RCLDBXREF or RCLSTG may be an issue.
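For anyone unsure whether the cross-reference data is actually
inconsistent before deciding how to fix it, my recollection is that
RCLDBXREF has a report-only invocation; prompt the command [F4] on your
release to confirm the option name:

   RCLDBXREF OPTION(*CHECK)  /* report-only check of the *DBXREF data; makes no changes */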
> This fixed the problem, if anyone else ever encounters it.
FWiW the "reclaim Database Cross-Reference" is somewhat ambiguous,
thus so is saying "This fixed the problem". Either of the commands
Reclaim DB Cross-Reference (RCLDBXREF) and Reclaim Storage (RCLSTG),
each having variable invocations, are capable of correcting an issue
with the *DBXREF data. However each invocation uses different means and
refreshes the data to a different degree, to effect a fix to the *DBXREF
data.
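As a rough sketch of the two routes [the parameter details here are
from memory, so prompt and verify each command on your own release
before running it]:

   /* Repair the cross-reference data in place; RCLDBXREF can be run   */
   /* while the system remains active.                                 */
   RCLDBXREF OPTION(*FIX) LIB(*ALL)

   /* Reclaim only the cross-reference portion via Reclaim Storage; my */
   /* recollection is that RCLSTG wants the system in restricted state */
   RCLSTG SELECT(*DBXREF)

The full RCLSTG SELECT(*ALL) invocation does considerably more work
again, which is why the degree of refresh differs with the choice made.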
Be warned, however, that a request to reclaim the *DBXREF information
runs asynchronously to the requester, so any RCLxxx request that
includes the *DBXREF should only be performed to recover from data
errors with the feature. A RCLxxx request that includes the *DBXREF
should not be performed as an attempt to recover from a functional error
with the system jobs that implement the feature; i.e. if the QDBSRVXR or
QDBSRVXR2 system jobs are reporting repeated "help me I can't get up"
messages rather than simply reporting occasional "I seem to have found
inconsistent data" messages, then always fix the functional problem
before asking those jobs to correct their data... which they cannot do
while they are non-functional.
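A quick way to see which situation applies, before requesting any
reclaim, is to look at the state of those system jobs and at the history
log; the generic job name and message-id filtering below are as I recall
the commands, so verify the parameters on your release:

   WRKACTJOB JOB(QDBSRVXR*)                 /* are the cross-reference server jobs up and healthy? */
   DSPLOG LOG(QHST) MSGID(CPD32B0 CPF32A1)  /* any recent cross-reference inconsistency messages?  */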
> Evidently, this is an on-going issue. Once we got to the right
> person, it was a "Oh yeah that happens, it's fixed by...." response.
As I noted, there is only one recovery [irrespective of the command
used to effect that recovery]. Thus that "it's fixed by..." response is
always appropriate. But IMO "the right person" would first have
suggested gathering documentation, or actually done so [even if that doc
were just left on the system], to assist in review if a new occurrence
of the apparently /same/ incident transpires again [soon].
To say that receipt of the condition is "on-going" seems to me like a
mis-characterization, or at least seems to carry a negative implication;
much as if one were to say that *DTAARA damage is "on-going" simply
because the response was to suggest that the error "is fixed by
DLTDTAARA", that request being the correct and only complete recovery
for a damaged data area.
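For comparison, that data area recovery is just the familiar
delete-and-recreate sequence; the object name and attributes here are
purely illustrative:

   DLTDTAARA DTAARA(MYLIB/MYDTAARA)                      /* remove the damaged object    */
   CRTDTAARA DTAARA(MYLIB/MYDTAARA) TYPE(*CHAR) LEN(50)  /* recreate, then repopulate it */

Nobody would call damage to that data area "on-going" merely because the
recovery is always the same pair of requests.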
Of course if one never determines the actual origin of the issue, and
the error continually recurs because nothing was ever implemented to
prevent the condition [most likely a preventive PTF], then "on-going"
would seem to be an appropriate term to describe the issue. But the
noted condition CPD32B0 RC7 should almost never happen on any system,
and, with the exception of an origin in CPF32A1, it almost always
originates from a defect [or, as with CPF32A1, possibly occurs per loss
of main storage; e.g. due to a crash from a power outage after loss of
protection from a UPS].
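For reference, the message description and reason codes for the noted
condition can be reviewed directly on the system, and the installed PTFs
for the operating system can be listed while hunting for a preventive
fix; the licensed-program identifier below is the 7.x value and will
differ on earlier releases:

   DSPMSGD RANGE(CPD32B0) MSGF(QSYS/QCPFMSG)  /* view the message text and reason codes */
   DSPPTF LICPGM(5770SS1)                     /* review PTFs installed for the OS       */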