With all due respect, what you are going to do is test experimentally a
well-known result of queueing theory:

    T = S * QM,  where QM = 1 / (1 - U)

T is the elapsed time to process the request (response time), S is the
service time - the CPU time needed to process the request, QM is the
queueing multiplier, and U is the CPU utilization by processes of equal or
higher priority.
This formula is somewhat more complicated for more than one processor.
I would expect the results of your measurements to plot nicely along the
curve described by the formula above.
BTW, this curve is hyperbolic - it blows up as U approaches 1 - and it has
a "knee" at about 0.7 (70%) utilization, beyond which response time starts
to skyrocket.
This is why the usual recommendation for keeping interactive response time
at bay is to keep the CPU utilization of jobs at interactive and higher
priorities below 70%.
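To see the knee numerically, here is a minimal sketch (Python, purely for
illustration - the sample utilizations and the service time are my own
assumptions, not figures from anyone's system):

    # Tabulate the queueing multiplier QM = 1/(1 - U) to show the
    # "knee" near 70% utilization. Illustrative only.
    def queueing_multiplier(u):
        """Response-time inflation for a single server at utilization u."""
        if not 0.0 <= u < 1.0:
            raise ValueError("utilization must be in [0, 1)")
        return 1.0 / (1.0 - u)
        # A common rough approximation for n processors is 1/(1 - u**n).

    service_time = 1.0  # S: assumed CPU seconds to process one request
    for u in (0.10, 0.30, 0.50, 0.70, 0.80, 0.90, 0.95, 0.99):
        t = service_time * queueing_multiplier(u)
        print("U = %.2f  QM = %6.2f  T = %6.2f" % (u, queueing_multiplier(u), t))

Running it shows T roughly tripling by 70% busy and growing tenfold by 90%,
which is the knee described above.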
PS. Of course, there are more factors influencing performance than just
queueing for the CPU (record locks, contention for disk arms, etc.), but in
most cases this formula holds up.
Best regards
    Alexei Pytel
Pete Hall <pbhall@execpc.com> on 08/12/99 08:54:14 PM
Please respond to MIDRANGE-L@midrange.com
To:   MIDRANGE-L@midrange.com
cc:
Subject:  Re: Recommended CPU % utilized
At 08:10 08/12/1999, Henrik Krebs wrote:
>I think there can be no max recommended CPU % utilized.
>On a well-tuned system the CPU % should be near 99% (well - in a mixed
>environment it will not come that close) and fall to zero when there is
>nothing to do. Upgrading this machine should result in 99% again, though
>for a shorter time.
>The reason for upgrading should be the total throughput. If it's too low
>there are many places to look, mainly disk busy % and memory page faults.
>The reason for looking at a faster CPU should be that disk busy % doesn't
>say 'buy more arms' and page faults don't say 'buy more memory'.
Exactly. We have low swap rates and reasonable disk access stats, but we
are CPU constrained. I have tuned the system to protect interactive
response. Consequently, I'm looking at batch throughput degradation. Today
I wrote a program that performs a CPU-intensive benchmark test every 30
seconds and writes a timestamp and elapsed time to a file. I will then
compare this with the CPU activity % value for the time interval and plot
the elapsed time against the % busy. Hopefully, this will reveal a
degradation curve that I can use to demonstrate throughput loss due to lack
of CPU power. I have an analytic boss, who is also the CFO. I would be
happy for any other suggestions...
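(A sketch of the kind of measurement loop described above - Python, for
illustration only. Pete's actual program is not shown, so the workload
function, iteration count, and log file name here are all assumptions.)

    # Run a fixed CPU-bound workload roughly every 30 seconds and log a
    # timestamp plus elapsed wall-clock time, for later correlation with
    # the CPU busy % over the same interval.
    import time

    def cpu_benchmark(iterations=2000000):
        """A fixed amount of CPU work; its wall-clock time stretches as
        the queueing multiplier grows."""
        total = 0
        for i in range(iterations):
            total += i * i
        return total

    with open("benchmark.log", "a") as log:
        while True:
            start = time.time()
            cpu_benchmark()
            elapsed = time.time() - start
            log.write("%s %.3f\n"
                      % (time.strftime("%Y-%m-%d %H:%M:%S"), elapsed))
            log.flush()
            time.sleep(30)  # sample roughly every 30 seconds, as described

Plotting the logged elapsed times against the measured CPU busy % for each
interval should trace out the 1/(1 - U) curve from Alexei's formula.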
Pete Hall
pbhall@execpc.com
http://www.execpc.com/~pbhall
+---
| This is the Midrange System Mailing List!
| To submit a new message, send your mail to MIDRANGE-L@midrange.com.
| To subscribe to this list send email to MIDRANGE-L-SUB@midrange.com.
| To unsubscribe from this list send email to MIDRANGE-L-UNSUB@midrange.com.
| Questions should be directed to the list owner/operator: david@midrange.com
+---