James, I'm catching up. Remember that we're talking about many arms, so objects of sufficient size are spread across multiple disks. This gives good performance because of the inherent parallelism. "Sufficient size" means, probably, over 1 meg: the largest piece of an object that will reside on one arm is 1 meg, the maximum size of an extent. Smaller objects may use smaller extents, I believe.

Now in DOS and Windows you get fragmentation, because pieces of objects end up all over the disk as they get changed, so performance goes down. NTFS is supposed to have less trouble with this, IIRC. And the EXT2FS of Linux (is that still used?) is supposed to be pretty good here, too - it cleans up after itself.

Regards

Vern

> On Tue, 29 Oct 2002, Hans Boldt wrote:
>
> In addition to this, it seems to me that there is another performance
> problem with single level store. I don't know this for sure, but it
> seems right. If an object in main memory needs to be pushed back to
> disk because memory is low, it is written back to its original
> location on disk. Then when it is retrieved back into main memory, it
> is read again from that spot on the disk. This is inefficient use of
> disk, as these reads/writes will be all over the place instead of in a
> closely clustered area of disk.
>
> James Rich
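For anyone trying to picture the extent spreading Vern describes, here is a minimal sketch in Python (my own illustration, not actual single-level store code; the round-robin placement policy and all names are assumptions) of how an object larger than 1 meg might be carved into extents of at most 1 meg and dealt out across several arms:

# Hypothetical illustration of spreading an object across disk arms.
# Not actual OS code; the extent size cap and round-robin placement
# are assumptions based on the discussion above.

EXTENT_MAX = 1 * 1024 * 1024  # 1 meg, the stated maximum extent size

def place_extents(object_size: int, num_arms: int) -> list[tuple[int, int]]:
    """Split an object into extents of at most EXTENT_MAX bytes and
    assign each extent to an arm round-robin. Returns a list of
    (arm_number, extent_size) pairs."""
    placements = []
    offset = 0
    extent_no = 0
    while offset < object_size:
        size = min(EXTENT_MAX, object_size - offset)
        arm = extent_no % num_arms  # spread successive extents across arms
        placements.append((arm, size))
        offset += size
        extent_no += 1
    return placements

# A 3.5-meg object on a system with 4 arms lands on all 4 arms,
# so a full scan of the object can be serviced in parallel:
for arm, size in place_extents(int(3.5 * EXTENT_MAX), 4):
    print(f"arm {arm}: extent of {size} bytes")

The point of the round-robin step is that a sequential read of the whole object keeps every arm busy at once instead of queueing all the I/O on a single arm, which is the parallelism Vern is pointing to.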