


James,

Sometime last year, July or August, I had the exact same problem.
A call to IBM turned up some issue that had cropped up, and
they did try to help me fix it, but the final solution came down to
hunting for locked objects. I finally used this:

http://www.rpgpgm.com/2015/07/checking-for-locked-objects-in-qdls-and.html

We were doing system saves on occasion, and I found it saved me some
grief to run this before I did my backup.
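(For anyone finding this thread later: on more recent IBM i releases there is
also an SQL service, QSYS2.IFS_OBJECT_LOCK_INFO, that reports jobs holding
locks on an IFS object. It did not exist when this thread was written, and
the path below is just a placeholder, but a sketch would look something like:

SELECT *
  FROM TABLE(QSYS2.IFS_OBJECT_LOCK_INFO(
         PATH_NAME => '/foo/bar/db/bazdb.lck'));

Run it for each object the save complained about to see which job has it
open.)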

HTH,

Bill






From: "James H. H. Lampert" <jamesl@xxxxxxxxxxxxxxxxx>
To: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxx>
Date: 03/14/2016 11:47 AM
Subject: Weird CPFA09E/CPD384E on a SAV
Sent by: "MIDRANGE-L" <midrange-l-bounces@xxxxxxxxxxxx>



This weekend, I discovered some locked-up daily backup jobs, with
CPFA09E and CPD384E messages for three IFS files.

Looking up CPFA09E, I found this page
<http://www-01.ibm.com/support/docview.wss?uid=nas8N1016325>
which says:
If an Integrated File System object is checked out and the UPDHST
parameter is set to *YES (the default), the save operation will
receive message CPFA09E.
. . .
If an Integrated File System object is checked OUT and UPDHST is set
to *NO on the SAV command (as seen in the following example), this
could be used as a work-around
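(The example itself is not reproduced above; a sketch of what such a command
might look like, with placeholder library, save file, and path names, would
be:

SAV DEV('/qsys.lib/mylib.lib/mysavf.file')
    OBJ(('/foo/bar/db' *INCLUDE))
    UPDHST(*NO)

per the page, UPDHST(*NO) skips the update of the save history that fails on
a checked-out object.)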

But in fact, the default for UPDHST is *NO.

The only way I could find locks was to bring up iNav (I'd forgotten I
even HAD a working iNav!) and look at the "Use" tab of "Properties" on
them. One has a single job "Shared (All) - Write only," while the other
two have a single job each "Shared (All) - Read/Write."

I tried a command-line save of just the IFS directory,

SAV DEV('/qsys.lib/qtemp.lib/foo.file')
    OBJ(('/foo/bar/db' *INCLUDE))
    SAVACT(*YES) SAVACTOPT(*ALL) CLEAR(*ALL) DTACPR(*HIGH)

and got this:
Save-while-active checkpoint processing complete.
Object in use. Object is /foo/bar/db/bazdb.lck.
Cannot open object /foo/bar/db/bazdb.lck.
Object in use. Object is /foo/bar/db/bazdb.lobs.
Cannot open object /foo/bar/db/bazdb.lobs.
Object in use. Object is /foo/bar/db/bazdb.log.
Cannot open object /foo/bar/db/bazdb.log.
4 objects saved. 3 objects not saved.

And yet, I can look at the contents of the files with DSPF or even EDTF.

The files are being used by a webapp context running in a Tomcat server,
in connection with Google Integration, and I'm told that they are HSQLDB
files (why we are using HSQLDB and not DB2/400, I don't know).

Any insights as to what's going on?

--
JHHL
