From: Trevor Perry
> The CPU% being high is not a measure of good or bad performance.
I agree. You can still get good response even though CPU utilization is high. I performed a stress test against some of our Web applications recently, using HP LoadRunner to measure and test performance. It was enlightening to see that most requests completed in under 15 milliseconds, even though our single-core server was pounded with about 400 requests per second and CPU was at 80%. However, our Web applications run under IBM i, not Windows.
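Those numbers hang together under simple queueing arithmetic. Treating the server as a single queue (an idealization for illustration, not a model of the actual machine), 400 requests per second at 80% CPU implies about 2 ms of service time per request, and the M/M/1 formula predicts a mean response time around 10 ms, consistent with "under 15 milliseconds":

```python
# Back-of-the-envelope M/M/1 check of the measurements above.
# This is an idealized single-queue model, not IBM i internals.
lam = 400.0           # arrival rate, requests/second (from the load test)
rho = 0.80            # measured CPU utilization
mu = lam / rho        # implied service rate: 500 req/s, i.e. 2 ms each
t = 1.0 / (mu - lam)  # M/M/1 mean response time: 1/100 s = 10 ms

print(f"service time: {1000 / mu:.1f} ms")   # prints 2.0 ms
print(f"mean response: {t * 1000:.1f} ms")   # prints 10.0 ms
```

The point of the arithmetic: at 80% utilization the queueing delay is still only a few multiples of the service time, so high CPU% by itself does not imply slow responses.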
> the numbers of the prestart QZDASOINIT jobs may be configured very
Mike Cunningham made that point too.
> and when a new job is needed, the system has to spend some
> resources creating that new job.
But if the number of prestart DB server jobs is too low, that would explain only a temporary performance lag until more jobs are started, which happens automatically.
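For anyone who wants to look: the prestart job entry for the QZDASOINIT database servers lives in the QUSRWRK subsystem and can be inspected and tuned with CL. The parameter values below are illustrative examples only, not recommendations for any particular workload:

```
/* Display the subsystem description; take option 10 to see  */
/* the prestart job entries, including QZDASOINIT.           */
DSPSBSD SBSD(QSYS/QUSRWRK)

/* Example only: raise the initial, threshold, and            */
/* additional-jobs counts so fewer requests wait on job       */
/* creation.  Pick values to fit your own connection volume.  */
CHGPJE SBSD(QSYS/QUSRWRK) PGM(QSYS/QZDASOINIT) +
       INLJOBS(20) THRESHOLD(5) ADLJOBS(10) MAXJOBS(*NOMAX)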
Brian indicated that CPU utilization gradually crept up over a couple of years as new applications were added. That sounds normal for servers; it reflects the additional workload. A continuously high rate of page faults could indicate a memory problem, but he didn't report that. Maybe this is a Windows problem. When workloads are distributed across multiple server tiers, it's harder to get a handle on the problem.
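On the IBM i side at least, the page-fault question is quick to check from a command line; the fault columns are per storage pool, per second:

```
/* Shows DB and non-DB page faults per second for each pool. */
/* Press F10 to reset the statistics, let it run a while,    */
/* then F5 to refresh and read the steady-state rates.       */
WRKSYSSTS ASTLVL(*INTERMED)
```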
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list
To post a message email: MIDRANGE-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
or email: MIDRANGE-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives