On 30-Oct-2015 12:32 -0500, Jerry Draper wrote:
On 10/29/2015 10:52 AM, CRPence wrote:
On 29-Oct-2015 12:35 -0500, Jerry Draper wrote:
In a recent DR simulation we restored some libraries with lots
of DDM files.
The target system of the DDM files was NOT on the network as
this was a simulation.
The restore operation of each file failed after a timeout of 315
seconds.
There are many, many DDM files, so at 12 per hour we weren't
getting anywhere fast.
There are no break messages, so you only find out something is
wrong after wondering why the restore is taking so long.
How can this be managed so we can restore the files and NOT
validate the remote system?
What is the actual information from a spooled joblog? Just a SWAG,
that [the messaging in the joblog will reveal that] the timeout has
nothing to do with contacting a remote system, and instead is a
functional issue with the *DBXREF.
FILE DDMRTNFLAG not restored to DDMSOURCE.
Dequeue operation not satisfied in 315 seconds for queue &1
<<SNIP>>
Sadly, such a copy\paste of the messages [apparently taken from a
command-entry window] gives no context. So although I might correctly
infer from what was given that the first is the error CPF3706 and
that the second is msg MCH5801, that by itself is not meaningful.
There is too little to go on, because I have no idea what code issued
the error nor to what code the error was directed; i.e. no context,
aside from knowing the request was a Restore Library (RSTLIB). The
spooled joblog [DSPJOBLOG OUTPUT(*PRINT) of the job running with
LOG(4 0 *SECLVL)] gives much additional necessary context. Note: I
expect that the order of the two messages shown is reversed, such
that the second shown is actually the error precipitating the next
[but snipped] error, and that what precipitated the first error shown
was not included in the copy\paste output.
Even without the context revealed, I am [somewhat more] confident
that the origin of the issue is likely a failed *DBXREF, and that
until the *DBXREF functional issue is resolved, generic /object/
operations on DDM Files will exhibit the same timeout error. Other
operations would instead fail with an error indicating an issue with
the *DBXREF, similarly exhibiting a timeout, or possibly even hanging
indefinitely awaiting feedback via the internal\temporary queue;
feedback that will never arrive, given the background QDBSRVXR job is
not [properly] operational.
Note: An attempt to resolve a functional issue with the *DBXREF by
using a request to Reclaim Storage (RCLSTG) that includes the *DBXREF
[thus similarly, use of the Reclaim Database Cross-Reference
(RCLDBXREF)] should *not* be made unless the functional issue has
been investigated and is known to be resolved by that request. The
reason is that, for the reclaim request to be functional, the *DBXREF
must either already be functional or be expected or known to become
functional as a result of the request; otherwise the reclaim may just
exacerbate or perpetuate the problem.
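A hedged sketch of what might be reviewed first, before any reclaim
is attempted [the commands and parameters are as I recall them, and
should be verified against the installed release]:

  WRKJOB JOB(QDBSRVXR)      /* Inspect the background cross-reference job and its joblog */
  RCLDBXREF OPTION(*CHECK)  /* Report, without repairing, *DBXREF inconsistencies */
                            /* Only after the failure is understood, consider */
                            /* RCLDBXREF OPTION(*FIX) or RCLSTG SELECT(*DBXREF) */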