Ah ha, a System Value question! When the System/38 shipped, one of the ways IBM theorized it could throttle performance was with the system value QMAXACTLVL. But IBM never shipped a System/38 large enough to have 100 jobs concurrently active. By the time the AS/400 got large enough to support this, it became immediately apparent that this was a lousy metric. So when you installed V2R2, this system value was changed from the old default of 100 to *NOMAX. The problem was that on certain systems the change never took effect; it's not clear to me why - possibly only the shipped default was changed. The bottom line is clear: change this system value to *NOMAX.

Al Barsa, Jr.
Barsa Consulting Group, LLC
400>390
"i" comes before "p", "x" and "z"
e gads, our system's had more names than Elizabeth Taylor!
914-251-1234
914-251-9406 fax
http://www.barsaconsulting.com
http://www.taatool.com
http://www.as400connection.com

From: Martin Rowe <dbg400.net@gmail.com>
Sent by: midrange-l-bounces@xxxxxxxxxxxx
To: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxx>
Date: 12/06/2005 10:04 AM
Subject: QMAXACTLVL and system performance
Please respond to: Midrange Systems Technical Discussion <midrange-l@midrange.com>

Hi folks

We've just had a repeat of an interesting problem - our production iSeries[1] partition 'froze' for about 15 minutes, then carried on as if nothing had been the matter. We had this two weeks ago (but a bit earlier in the day), and gaps in logs and response-time analysis suggest it has happened at other times too. We couldn't find anything obvious last time, and IBM just suggested we try to get into DST to force a storage dump - except that when the system is frozen, not even the console works. This time I noticed a CPF0908 - 'Machine ineligible condition threshold reached' - in QSYSOPR's messages, which refers to QMAXACTLVL. The value on our system (140) was migrated during the previous upgrade (and probably several prior to that).
Other partitions (for WebSphere, mirroring, thin primary etc.) that were created from scratch all have the default *NOMAX. From IBM's documentation it seems that 140 is probably too low, but it would (?) have been set for a reason (at some time in the dim & distant past). I can't see any downside to making this *NOMAX - are there any? We've upped it to 500 pending further discussion. On our development partition that runs WebSphere it was set down to 50, and changing it to *NOMAX has improved performance - anecdotally at least.

Oddly enough, not everything was frozen. NFS shares to my Linux box worked fine, but NetServer shares to Windows stopped. I'm wondering if part of the problem was the system TCP/IP server also suffering from the low thread setting (NFS is UDP based, as are pings, which worked as well).

Regards, Martin

[1] i825, V5R2, up to date on PTFs

--
martin@xxxxxxxxxx  AIM/Gaim: DBG400dotNet  http://www.dbg400.net
/"\  DBG/400 - AS/400 & iSeries Open Source/Free Software utilities
\ /  Debian GNU/Linux | ASCII Ribbon Campaign against HTML mail & news
 X

--
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list
To post a message email: MIDRANGE-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options, visit:
http://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives at
http://archive.midrange.com/midrange-l.
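[Archive note] For anyone reaching this thread from a search: the check and the fix discussed above come down to a couple of CL commands. This is a sketch from memory, not a transcript from the thread - verify the exact syntax with command prompting (F4) on your own release, and note that changing a system value on a production partition should go through your normal change control:

    DSPSYSVAL SYSVAL(QMAXACTLVL)                 /* Display current value */
    WRKSYSVAL SYSVAL(QMAXACTLVL)                 /* Or work with it interactively */
    CHGSYSVAL SYSVAL(QMAXACTLVL) VALUE(*NOMAX)   /* Set to *NOMAX, per the advice above */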