"James H. H. Lampert" on Mon, 22 Sep 2014 09:32:31 -0700 wrote:

Is there a good way to determine what records were lost?


Note: This is my first attempt posting using the list vs NNTP;
hoping all goes well.

Use the arrival sequence access path to read each row separately,
forward through the file member, until a condition is reported
diagnosing that a record cannot be accessed. Position to and read the
next record after each relative row that could not be retrieved,
repeating until a record is accessed without error. From that
position, continue to process via the arrival access path, reading
each row separately, forward through the file member, until another
condition is reported diagnosing that a record cannot be accessed.
Repeat the positioning past the inaccessible records and the
sequential reads until End Of File (EOF) is reached.

The above can be done with CPYF, using numeric specifications on the
FROMRCD() parameter while an override with SEQONLY(*YES 1) is in
place for the FROMFILE(). The ERRLVL() parameter should indicate that
no errors are to be ignored, so that the first inaccessible record
causes the Copy File request to terminate. Each successive request,
with the From Record value incremented beyond the RRN that could not
be copied, effects the position-and-read and then implicitly
continues in arrival sequence.
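A rough sketch of that probe in CL source, assuming a from-file
MYLIB/MYFILE and a work copy QTEMP/GOODRCDS; the library, file, and
RRN values shown are placeholders for illustration, not a tested
recovery procedure:

  OVRDBF FILE(MYFILE) TOFILE(MYLIB/MYFILE) SEQONLY(*YES 1)

  /* First pass: ERRLVL(0) tolerates no errors, so the copy ends  */
  /* at the first inaccessible record.                            */
  CPYF FROMFILE(MYLIB/MYFILE) TOFILE(QTEMP/GOODRCDS) +
       MBROPT(*REPLACE) CRTFILE(*YES) FROMRCD(*START) ERRLVL(0)

  /* The failure messages indicate how far the copy got, e.g.     */
  /* that RRN 1234 (placeholder) could not be read. Resume just   */
  /* past it, and repeat this step for each inaccessible RRN      */
  /* until the copy runs to end of file.                          */
  CPYF FROMFILE(MYLIB/MYFILE) TOFILE(QTEMP/GOODRCDS) +
       MBROPT(*ADD) FROMRCD(1235) ERRLVL(0)

  DLTOVR FILE(MYFILE)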

Those RRNs that cannot be accessed are the records that were /lost/,
as far as the inquirer is concerned. Review of the data for each of
those RRNs, from the last save and/or from the journal/log data,
would enable determining what those records were [beyond just an RRN].
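
If the member is journaled and the receivers since the last save are
still available, one way to review what a lost record contained is to
dump the record-level journal entries to an outfile and select by
RRN; a sketch assuming journal JRNLIB/CRMJRN, file MYLIB/MYFILE, and
a lost RRN of 1234 (all placeholders). In the *TYPE4 outfile format,
JOCTRR carries the relative record number for record-level entries,
and RUNSQL (or interactive SQL) can select on it:

  DSPJRN JRN(JRNLIB/CRMJRN) FILE((MYLIB/MYFILE)) RCVRNG(*CURCHAIN) +
         ENTTYP(*RCD) OUTPUT(*OUTFILE) OUTFILFMT(*TYPE4) +
         OUTFILE(QTEMP/JRNOUT)

  /* JOESD holds the record image; ENTDTALEN on the DSPJRN may    */
  /* need to be raised to capture the full record.                */
  RUNSQL SQL('SELECT JOSEQN, JOENTT, JOCTRR, JOESD +
              FROM QTEMP/JRNOUT WHERE JOCTRR = 1234') COMMIT(*NONE)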


Does anybody have any tips on determining the underlying
cause(s)?


The Database [and perhaps other] VLIC Logs (VLOGs) preceding the
first partial-damage VLOG reported for that dataspace would help to
identify the origin, along with any supporting documentation, such as
the joblog(s) of the job(s) operating against the member when the
original damage-set condition transpired. IIRC the damage-set should
have been recorded in the history log with a message in the CPF8100
or CPF8200 range; DSPLOG QHST MSGID(CPF8100 CPF8200) PERIOD((*AVAIL
*BEGIN))
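
If the job(s) involved can still be identified, their joblogs can be
spooled as part of that supporting documentation; a sketch, with a
placeholder job identifier:

  DSPJOBLOG JOB(123456/SOMEUSER/SOMEJOB) OUTPUT(*PRINT)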


We got a call from a customer, who turned out to be throwing
CPF5030s on one particular file of our CRM product.


Typical recovery would be to move the database file network to
another library, end journaling of the moved files [or do so before
moving], restore the DBF network into the product library, apply the
journaled changes from the save up to the point of consistency, and
then reapply any missing changes/transactions; applying all journaled
changes and then either correcting or reversing any inconsistent
changes is another option. Note: if consistency was properly
maintained, e.g. due to atomicity of the updates [per commitment
control], then merely repeating each of the failed transactions
[since, and per, the damage] after applying all journaled changes is
sufficient, because there are no consistency issues to be concerned
with.
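
A minimal CL sketch of that sequence, assuming a single physical file
PRODLIB/CRMFILE journaled to JRNLIB/CRMJRN, a holding library SAVLIB,
and a last save in save file SAVLIB/CRMSAVF; all names are
placeholders, and a real DBF network (dependent logical files, etc.)
needs every member of the network handled together:

  /* Set the damaged file aside and end journaling on the copy    */
  MOVOBJ     OBJ(PRODLIB/CRMFILE) OBJTYPE(*FILE) TOLIB(SAVLIB)
  ENDJRNPF   FILE(SAVLIB/CRMFILE)

  /* Restore the file from the last save into the product library */
  RSTOBJ     OBJ(CRMFILE) SAVLIB(PRODLIB) DEV(*SAVF) +
               SAVF(SAVLIB/CRMSAVF) OBJTYPE(*FILE)

  /* Apply the journaled changes recorded since that save, up to  */
  /* the chosen point of consistency (TOENT/TOTIME as needed)     */
  APYJRNCHG  JRN(JRNLIB/CRMJRN) FILE((PRODLIB/CRMFILE)) +
               FROMENT(*LASTSAVE) TOENT(*LAST)

Any changes/transactions after the applied point are then re-entered
or corrected, as described above.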

Regards, Chuck
