Hi everybody,

A customer has performance problems with his model 720, processor #2062, running V4R4, with the latest Cumulative and Database group PTFs applied. They have quite a high load, mainly in the evening, when many of their accountants simultaneously run a high-load interactive application. The load is more or less the same every evening, and the AS/400 keeps up fairly well every day... except some few evenings, when it is brought to its knees.

We decided to run Performance Tools, so they would start collecting data with TRACE(*ALL) when the problem comes up. Data with trace was gathered for a 20-minute time frame (four 5-minute intervals) during the problem situation. I've taken a look at the various performance reports, and I'd like to put forward some questions.

- Disk activity, pool paging and transitions, etc. all look all right.
- Seize and lock contentions have low values (not enough to have a high influence on the time frame under study).
- CFINT01 shows as "eating up" a rough average of 38.8% of CPU.

Question 1: I thought CFINT01 was discontinued by now, or maybe it was in V4R5? Anyway, could it happen that CFINT01 would "come into play" just some few evenings (but not all of them), even though the overall load is similar every evening?

- The only thing I have found "strange" so far has to do with "exceptions". The Component Report shows most jobs with an "Arithmetic Overflow" count of 0 (I guess that's "normal"...), but some jobs have a very high value, around the 20,000 to 40,000 figure. The final summary of exceptions shows totals (for the 20-minute window) of:
  - Size exceptions: over 291,000
  - Decimal overflow: over 292,000

Question 2: The Performance Tools documentation shows samples with both values at 0 (I guess that's normal).
- Are both figures, over 290,000 for the whole 20-minute interval, "dramatic"?
- What does "size exceptions" stand for? What is its real meaning?
- Could the decimal overflow have something to do with "ignore decimal data error"?
(The old application was S/36 type, but nowadays I'm told the whole application has been rewritten in COBOL and the DB files converted, so I guess there should not be any "decimal data errors"... at least not in such high numbers!)
- If that's not the case, what is the meaning of decimal overflow exceptions, and what could be the reason for this abnormally high value?

TIA
--
Antonio Fernandez-Vicenti
afvaiv@wanadoo.es

+---
| This is the Midrange System Mailing List!
| To submit a new message, send your mail to MIDRANGE-L@midrange.com.
| To subscribe to this list send email to MIDRANGE-L-SUB@midrange.com.
| To unsubscribe from this list send email to MIDRANGE-L-UNSUB@midrange.com.
| Questions should be directed to the list owner/operator: david@midrange.com
+---
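[Archive note] To make the two exception types in the question concrete, here is a minimal sketch in Python of the underlying conditions: a "decimal data error" (a packed-decimal field containing a nibble that is not a valid digit or sign, e.g. a blank-initialized S/36 field), and a "decimal overflow / size exception" (a result with more digit positions than the receiving field, losing high-order digits). This is an illustrative stand-in, not the actual machine code path; the field sizes and sample bytes are hypothetical.

```python
def decode_packed(data: bytes) -> int:
    """Decode IBM-style packed-decimal (BCD) bytes into an int.

    Each nibble holds one decimal digit, except the final nibble,
    which holds the sign (0xD = negative, other values >= 0xA = positive).
    A non-digit data nibble or invalid sign nibble is the condition a
    "decimal data error" reports; hypothetical simplified check here.
    """
    nibbles = []
    for b in data:
        nibbles.append(b >> 4)
        nibbles.append(b & 0x0F)
    *digits, sign = nibbles
    if any(d > 9 for d in digits) or sign < 0x0A:
        raise ValueError("decimal data error: invalid digit or sign nibble")
    value = int("".join(str(d) for d in digits))
    return -value if sign == 0x0D else value


def move_with_size_check(value: int, digits: int) -> int:
    """Model a MOVE/compute into a numeric field of `digits` positions.

    If the value needs more digit positions than the target has, the
    high-order digits are lost; the system would count a decimal
    overflow (size) exception. Here we just report and truncate.
    """
    limit = 10 ** digits
    if abs(value) >= limit:
        print(f"decimal overflow: {value} does not fit a {digits}-digit field")
    return (abs(value) % limit) * (1 if value >= 0 else -1)


# Valid packed field 0x12 0x3C decodes to +123.
print(decode_packed(bytes([0x12, 0x3C])))   # 123

# An EBCDIC-blank-filled field (0x40 0x40) is not valid packed data:
# the sign nibble is 0x0, so this raises a "decimal data error".
try:
    decode_packed(bytes([0x40, 0x40]))
except ValueError as e:
    print(e)

# A 7-digit result moved to a 5-digit field loses its high-order digits.
print(move_with_size_check(1234567, 5))     # 34567
```

In COBOL terms, the second case is what `ON SIZE ERROR` guards against; when each such exception is handled hundreds of thousands of times in a 20-minute window, the per-exception handling cost itself can become a measurable CPU load.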