> A.    43.4 seconds elapsed 3.0 seconds CPU time.
> B.    20.4 seconds elapsed 2.6 seconds CPU time.
> C.    4 seconds elapsed 2 seconds CPU time.
> D.    210 seconds elapsed 89 seconds CPU time.
>
[SNIP]
> Can anyone explain why C is better than A and B?

Without seeing your code, it's a little difficult to say for sure.

Here's what I'm thinking, though, based on assumptions about how the code
works:

1) You're reading the entire file in all cases.  As long as each read
     transfers at least as much data as the minimum amount the disk
     normally transfers to memory, the speed of this part of the
     operation should be about the same in every case.

2) A user space is a disk object.  Sure, if it gets a lot of accesses,
     the operating system will try to bring it into memory to make it
     faster, but that's not going to happen instantaneously.  Also,
     because the object is so large, that probably affects whether the
     system brings it into memory at all.

3) ALLOC'ed memory is in RAM.  Sure, it may be paged out to disk if
     the memory is required by other processes, but since you're
     accessing it constantly, the OS will try not to page it out if it
     can help it.  Since it's so large, though, there's a good chance
     that at least part of it will have to be paged out.

4) The 32k buffer is almost certainly in RAM: it's small, so it's easy
     to keep on hand, and you're using it constantly.  Also, because
     you're using less memory, you're not slowing down other processes
     by taking up all of the RAM.

5) The small-data-at-a-time reads are going to be slow because the
     system has to do more disk transfers to get the same amount of
     data.  (A rough sketch of the big-buffer and 32k-buffer approaches
     follows this list.)
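
For what it's worth, here's roughly how I picture the two approaches in
C.  This is only a sketch, not your code: I'm guessing at the stream I/O
calls (open/read/close) and the 32000-byte buffer size, and it just
counts bytes instead of processing records.

    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define CHUNK 32000   /* guess at the size of your 32k buffer */

    /* Big-buffer approach: one huge ALLOC'ed block holds the file. */
    static long read_all_at_once(const char *path, long filesize)
    {
        char *buf = malloc((size_t)filesize);
        int   fd  = open(path, O_RDONLY);
        long  total = 0;
        long  n;

        if (fd < 0 || buf == NULL) {
            if (fd >= 0) close(fd);
            free(buf);
            return -1;
        }
        while ((n = read(fd, buf + total, (size_t)(filesize - total))) > 0)
            total += n;        /* read() may come back short, so loop */
        close(fd);
        free(buf);
        return total;
    }

    /* 32k-buffer approach: one small buffer, reused for every chunk. */
    static long read_in_chunks(const char *path)
    {
        char buf[CHUNK];       /* small, so it stays resident */
        int  fd = open(path, O_RDONLY);
        long total = 0;
        long n;

        if (fd < 0)
            return -1;
        while ((n = read(fd, buf, sizeof(buf))) > 0)
            total += n;        /* a real program would process the chunk here */
        close(fd);
        return total;
    }

Either way the same bytes have to come off disk; the main difference is
how much storage has to stay resident while you do it, which is why I'd
expect the small buffer to tie or win once paging enters the picture.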

> I wonder whether these results would be repeated on an empty machine
> (mine wasn't - no such luxury).

The times might be smaller, but I think you'd see the same relative
results.  I'm assuming that all you did was read the file...  I don't see
why reading it all into a large block of memory would ever be faster
than reading it into a small block of memory and re-using that memory
every 32000 bytes.  They should be the same -- or the big buffer will be
slower if paging is necessary.

> I wonder whether A and B are getting paged out while the I-O is
> occurring while C isn't because it's doing more CPU intensive work
> (multiple reads).

Why would that be more CPU intensive?!  Either the read() routine is
looping, loading blocks of data, or your routine is looping.  From the
CPU's perspective, a loop is an IF and a GOTO...  I don't see why one
would be more CPU intensive than the other.
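
Just to illustrate what I mean by a loop being an IF and a GOTO -- this
is a made-up example, nothing to do with your program:

    #include <stddef.h>

    /* A loop written as a loop... */
    static long sum_with_while(const long *vals, size_t count)
    {
        long   total = 0;
        size_t i = 0;

        while (i < count) {
            total += vals[i];
            i++;
        }
        return total;
    }

    /* ...and the same loop as an IF and a GOTO.  The CPU does the same
       compare-and-branch work either way, whether the loop lives in
       read() or in your own routine. */
    static long sum_with_goto(const long *vals, size_t count)
    {
        long   total = 0;
        size_t i = 0;

    top:
        if (i < count) {
            total += vals[i];
            i++;
            goto top;
        }
        return total;
    }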

Are your benchmarks parsing the file into "lines" (CRLF-delimited
records)?  That would be the real test...
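
If you do try that, something along these lines is what I have in mind --
again just a sketch, working on whatever buffer your read loop already
fills:

    #include <stddef.h>

    /* Count CRLF-delimited records in a buffer of file data.  A real
       parser would copy or process each record instead of just
       counting, but the scanning work is the same either way. */
    static size_t count_crlf_records(const char *buf, size_t len)
    {
        size_t records = 0;
        size_t i;

        for (i = 0; i + 1 < len; i++) {
            if (buf[i] == '\r' && buf[i + 1] == '\n') {
                records++;
                i++;           /* skip the LF we just matched */
            }
        }
        return records;
    }

(If you read in 32k chunks, you'd also have to handle a CRLF pair that
gets split across two chunks.)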


