

  • Subject: Re: What About Price vs. Performance?
  • From: "Leif Svalgaard" <leif@xxxxxxxx>
  • Date: Thu, 5 Apr 2001 08:01:16 -0500

From: Joe Pluta <joepluta@PlutaBrothers.com>
> Leif, how are you measuring your time?  Are you including the database I/O?
> My screen requires about 20 database reads to build it.  So let's compare
> apples to apples.  If you're doing 20 DB reads and formatting a screen in
> 1.35msec, you're doing quite well.  Then again, I would ask how you
> measured it, and let me repeat your results.
>
> I got MY results by doing a WRKSYSACT, recording the elapsed CPU of the
> job, paging up and down 200 times, and rerecording the elapsed CPU.  How did you
> measure your time?
>
> There are lies, damn lies and statistics.

there are lies, damn lies, statistics and performance measurements!

No, I did not include DB I/O, as I thought we were comparing
(not them apples, but) HTML versus 5250 datastream *generation*.
Presumably the DB I/O would be the same for both and should be
factored out. If I take your quoted numbers (I don't know where they
fall on the scale from lies, damn lies, statistics, and performance
measurements):
HTML  100 msec
5250   20 msec

Then the DB I/O has to be less than 20msec, leaving at least 80
msec for the HTML generation. One could argue that the pure
DSP I/O time should be factored out as well, although
the larger volume of data for HTML might be important.
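
To spell out the arithmetic, here is a back-of-the-envelope sketch in
Python, treating your two figures as per-screen totals:

    # per-screen totals taken from the figures above (msec)
    html_total = 100.0
    total_5250 = 20.0

    # the DB I/O is the same in both cases, so it can be at most
    # the smaller of the two totals
    db_io_upper_bound = total_5250

    # whatever remains must be HTML generation (plus its display I/O)
    html_generation_lower_bound = html_total - db_io_upper_bound
    print(html_generation_lower_bound)   # at least 80 msec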

How did I do it: I changed my screen handler to generate the same
green screen 100,000 times, then measured that time with a
real watch (it took 135 seconds, which works out to 1.35 msec per screen).
Since I'm alone on the machine, that time is the best the machine can do.
Trying to count how many machine cycles it took is futile because of
CFINT, general overhead, etc. What counts is the *real* wall-clock time
something takes. The CPU time proper might be less than 1.35 msec, but
if I can't take advantage of that, what do I care?
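
The same idea in Python terms, with the wristwatch replaced by a
wall-clock timer call (just a sketch; build_green_screen stands in for
my actual screen handler):

    import time

    ITERATIONS = 100_000

    def build_green_screen():
        # stand-in for the real screen/datastream generation under test
        pass

    start = time.monotonic()              # wall-clock time, not CPU time
    for _ in range(ITERATIONS):
        build_green_screen()
    elapsed = time.monotonic() - start

    # dividing total wall-clock time by the iteration count gives the
    # per-screen cost
    print(f"total: {elapsed:.1f} s, per screen: {elapsed / ITERATIONS * 1000:.3f} msec")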






