On 28 Feb 2013 07:48, DrFranken wrote:
I created a 10G file in QTEMP with CRTDUPOBJ and that took just
under 5 minutes. That would certainly overload the 2G memory they
have. Clearing QTEMP took no time, and signing off with that object
in there took no time either.
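DrFranken's test can be sketched in CL roughly as follows. This is a hedged reconstruction, not his actual commands: the library name MYLIB and file name BIGFILE are illustrative assumptions, and the source file is assumed to already hold ~10 GB of data before being duplicated into QTEMP.

```
/* Duplicate an existing ~10 GB physical file into QTEMP.        */
/* MYLIB/BIGFILE is a hypothetical pre-populated source file.    */
CRTDUPOBJ  OBJ(BIGFILE) FROMLIB(MYLIB) OBJTYPE(*FILE) +
             TOLIB(QTEMP) DATA(*YES)

/* The cleanup steps whose timing the test observed: */
CLRLIB     LIB(QTEMP)
SIGNOFF
```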

What the users have in QTEMP is a whole set of files (at least a
dozen) with PFs, LFs, etc., but I don't think they have any significant
quantity of data in there. The files get created when they go into
System-21 and stay there until they sign off.

And very possibly, the issue is only for /database/ objects; i.e., if there were several hundred *USRSPC objects instead, would the job still end effectively immediately?

V5R1 may have been the release with several iterations of /fixes/ attempting to deal with QTEMP objects at job termination, specifically for performance issues with how [and when\where?] CLRLIB occurs for ended jobs. At one time, for example, QTEMP library objects were actually created with /temporary storage/ instead of permanent storage; while that may seem OK, it was very bad for the database, so that LIC context type had to be changed back to permanent objects. So anyhow...

While no PTFs were applied for this IPL, PTFs may have been applied in the past for which this IPL /activated/ a change. But almost anything IBM made visible on the web about v5r1 is long gone, so all I find are v5r3 references.

As for the performance statistics reviewed, as described in the OP, the issue may be /waits/ rather than a volume of seize\lock activity, or perhaps excessive async write activity in an environment that would seem to be more about reading than writing. Waits would be consistent with the low utilization rates reported for the hardware. So, for example, from a software perspective rather than a hardware one:

The following issue is v5r3, but it shows both how what the *DBXREF does may be a performance issue, and that there may be contention on the *USRPRF named QDBSHR:
http://www.ibm.com/support/docview.wss?uid=nas33a61f3528e6e23fd8625708300478cc3
"... Contention across many jobs having large number of database *FILE objects in library QTEMP, during job end."

What status are the jobs in while ending so slowly? While doing periodic refreshes watching the ending jobs in WRKACTJOB, does their status often show DEQW, perhaps? And watching the program stack, does it show the processing sitting at some specific instructions in QDBDLTFI? If so, those are likely waits on queued activity, or could instead be the instructions at the origin of contention on a shared object [such as QDBSHR].
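The monitoring suggested above might look like this in CL; the fully qualified job name is a hypothetical placeholder for one of the slow-ending jobs:

```
/* Watch the ending jobs; refresh periodically (F5/F10)          */
/* and note whether their status shows DEQW.                     */
WRKACTJOB

/* For a specific slow-ending job, display its call stack and    */
/* look for it sitting at particular instructions in QDBDLTFI.   */
/* 123456/QUSER/SOMEJOB is an illustrative job name.             */
DSPJOB     JOB(123456/QUSER/SOMEJOB) OPTION(*PGMSTK)
```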

I recall creating a PTF in v5r1 to improve throughput for the *DBXREF, in case waits in QDBDLTFI processing are identified as the issue [perchance contention on the QDBXREFQ object; type x'0AC4', I believe]. It would probably be among the last PTFs, though possibly not on the last cumulative. Some DMPSYSOBJ requests taken against that queue object during a slow-ending job might show many entries that are not clearing rapidly, which would likely indicate excessive [and unnecessary] activity burdening the *DBXREF, which then impacts other user jobs similarly.
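A dump request along these lines could capture the queue object's state during a slow job end. This is a sketch only; it assumes the x'0AC4' type/subtype recollection above is correct, and QDBXREFQ is the internal object named in the post:

```
/* Dump the database cross-reference queue object while a job    */
/* is ending slowly; repeat a few times and compare the entry    */
/* counts to see whether entries are clearing.                   */
/* TYPE/SUBTYPE give the hex MI object type x'0AC4' recalled     */
/* above -- verify before relying on it.                         */
DMPSYSOBJ  OBJ(QDBXREFQ) CONTEXT(QSYS) TYPE(0A) SUBTYPE(C4)
```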




This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page.
