My attempt to respond to the prior post got too messy, so I will
respond here to suggest that for the /no active data/ case, I think
there may be a defect. Why do I suspect a defect? Well... apologies in
advance, as the flow below is somewhat /thinking out loud/...
That conclusion suggests an oddity, probably worth review by the
development team. What is described is a case of /no active rows/
versus /no data/ versus /active rows/. In the /no data/ case I would
expect the access paths to be rebuilt inline with the restore. For the
/no active rows/ case I would expect the outcome to be identical to the
/active rows/ case, such that the access path would be both saved and
restored. It is possible the development team is not aware of this
outcome, and your scenario points out at least how the asynchronous
rebuild can be a problem for a scenario with a UNIQUE keyed logical. Or
perhaps, if the outcome is known, there is a missing PTF for QDBSVPRE
and/or QDBSVPST.
For the /no data/ case, I believe the rebuilds are done inline with the
restore, rather than asynchronously in the QDBSRV## runpty-52 jobs. As
I recall, they were rebuilt inline [synchronously and serially] in the
past [just as RGZPFM does by default], and that would be indicated by
status messages flashing by during a restore in an interactive job, to
the effect of "Access path for file &1 was rebuilt". In fact I recall
there were complaints about those messages. Perhaps the conclusion that
they are neither built nor restored comes from a lack of visual notice
of those flashing messages? They would appear in a trace.
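For anyone who would rather check than watch for flashing status
messages, something like the following run right after the restore
should tell the story; this is only my suggestion for how one might
look, not anything the original scenario did, and the file and library
names are placeholders of my own:

  /* Edit Rebuild of Access Paths; any access paths handed off   */
  /* to the QDBSRVxx jobs for asynchronous rebuild should show   */
  /* here as pending entries after the restore                   */
  EDTRBDAP
  /* Access path attributes for the restored keyed logical; the  */
  /* name QTEMP/MYLF is a placeholder                            */
  DSPFD      FILE(QTEMP/MYLF) TYPE(*ACCPTH)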
If not done synchronously for all files, it should probably be done at
least for any UNIQUE-defined access paths, specifically to prevent that
type of problem. Defect or not, the RGZPFM before the save should serve
as a resolution [even if the restore feature were changed]; a sketch of
that circumvention follows.
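Just to make the circumvention concrete, a minimal CL fragment [inside
a CL program, since MONMSG requires one; the library, file, and save
file names are placeholders of my own choosing] might look about like
this:

  PGM
    /* Remove deleted records so the keyed access paths are valid */
    /* before the save; CPF2981 is monitored, per the note quoted */
    /* below, in case there is nothing to reorganize              */
    RGZPFM     FILE(MYLIB/MYPF)
    MONMSG     MSGID(CPF2981)
    /* Save the physical and its logicals, requesting that the    */
    /* access paths be saved along with the data                  */
    SAVOBJ     OBJ(MYPF MYLF) LIB(MYLIB) OBJTYPE(*FILE) +
                 DEV(*SAVF) SAVF(MYLIB/MYSAVF) ACCPTH(*YES)
  ENDPGM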
Hmmm... my memory is blurry, but I am having some thoughts about the
inline rebuilds. Perhaps, after one of the problems exposed by those
changes, the no-load-with-rebuild handling for the /no data/ case was
replaced by either a fixup procedure or a create-as-valid-and-empty
feature. If so, then I wonder whether save treats the /no active rows/
member as /no data/, while restore recognizes that it is not actually a
legitimate /no data/ case but is instead a /no active rows/ case, such
that the newer feature would not function for that case. If, prior to
any such new feature, the behavior was an inline rebuild after load,
then this problem would not exist even if save and restore disagreed on
the definition of /empty/. I think that might be the problem.
Really the ACCPTH(*YES) never should have had its definition vitiated
by a change into an effective ACCPTH(*MAYBE). That is, the code change
to enable deferral of access path activity from save to restore for
/empty/ members should have been a new, separate option like
ACCPTH(*DATA). With separate options any problems would be more obvious
from the choice made on the parameter, and the other choices might
serve as a circumvention. If I had had my way, the change would have
been made only in a new/next release with a new keyword, or not made at
all. I had predicted that it was going to prove a very problematic
change, and I was proven a good prognosticator of outcomes for that
[bad] decision.
My first concern, I recall clearly, was that on a restore of many
thousands of keyed but empty members, the number of access path rebuild
events would flood the system and, due to a known issue [compounded by
the ¿runpty-9? QDBSRV01 job], could cause the system to effectively
hang from thrashing. That is why I recall the code was changed to do
synchronous rebuilds during restore. Anyhow, if that has since changed
to something other than inline builds, development probably needs to
review your scenario for consideration, as alluded to above.
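For anyone unfamiliar with the parameter being discussed, the existing
choice on the save commands is ACCPTH(*YES) or ACCPTH(*NO); the *DATA
value in the second command below is purely a hypothetical value I am
proposing, not one that exists, and the object names are placeholders:

  /* Existing: request that keyed access paths be saved along     */
  /* with the file data                                           */
  SAVOBJ     OBJ(MYPF) LIB(MYLIB) OBJTYPE(*FILE) DEV(*SAVF) +
               SAVF(MYLIB/MYSAVF) ACCPTH(*YES)
  /* Hypothetical [does not exist]: a separate value could have   */
  /* carried the save-only-when-the-member-has-data behavior,     */
  /* leaving *YES with its original unconditional meaning         */
  SAVOBJ     OBJ(MYPF) LIB(MYLIB) OBJTYPE(*FILE) DEV(*SAVF) +
               SAVF(MYLIB/MYSAVF) ACCPTH(*DATA)  /* hypothetical */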
Regards, Chuck
craigs@xxxxxxxxx wrote:
Just a clarification on my previous post after doing some detailed
tests: SAVOBJ an empty physical file with deleted records and any
logicals over that physical; then, later, a RSTOBJ to an empty library
like QTEMP triggers the rebuilding of access paths. RGZPFM with
MONMSG(CPF2981) before the SAVOBJ ensures the access paths are
"cleaned". This ensures that the QDBSRVxx jobs running to rebuild
access paths don't cause a CPYF to those files to blow up. If the file
has data, the access path will be saved and therefore restored, so
that would not cause rebuilding of the access path. If the file is
empty (no current or deleted records) it neither restores an access
path nor submits an access path rebuild.
So, to effectively stop CPYF from blowing up with CPF5090 then
CPF2972 after a RSTOBJ, make sure the access paths aren't triggered
to rebuild, by making sure deleted records are removed BEFORE the
files are restored. RGZPFM with MONMSG(CPF2981) before the SAVOBJ, and
the access paths will be cleaned inline with the RGZPFM (no QDBSRVxx
jobs used). Note that this doesn't stop all RSTOBJ access path
rebuild submissions, such as restoring a physical without the logical.
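[For my own clarity in reading the above, my understanding of the
failing sequence is roughly the fragment below; the library, file, and
save file names are placeholders of my own, and the CPF5090/CPF2972
pair is the failure reported above, not something I have re-verified:

  /* Member holds only deleted records at save time, with a      */
  /* UNIQUE keyed logical over it                                 */
  SAVOBJ     OBJ(MYPF MYLF) LIB(MYLIB) OBJTYPE(*FILE) +
               DEV(*SAVF) SAVF(MYLIB/MYSAVF) ACCPTH(*YES)
  /* Restore into an empty library; the access path rebuild is    */
  /* handed to the QDBSRVxx jobs asynchronously                   */
  RSTOBJ     OBJ(*ALL) SAVLIB(MYLIB) DEV(*SAVF) +
               SAVF(MYLIB/MYSAVF) RSTLIB(QTEMP)
  /* A copy into the restored file before that rebuild completes  */
  /* is what reportedly fails with CPF5090 and then CPF2972       */
  CPYF       FROMFILE(MYLIB/MYPF) TOFILE(QTEMP/MYPF) MBROPT(*ADD)
]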