Charles, great link. Got any more on IO subsystems? Thanks! RH

"Wilt, Charles" <CWilt@xxxxxxxxxxxx> wrote in message
news:F520B5C51DB10041B239BC06383A7EDC01B421BC@xxxxxxxxxxxxxxxxxxxxxxxxxx

> > -----Original Message-----
> > From: midrange-l-bounces@xxxxxxxxxxxx
> > [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Ryan Hunt
> > Sent: Friday, March 31, 2006 2:14 PM
> > To: midrange-l@xxxxxxxxxxxx
> > Subject: Re: RGZPFM
> >
> > A couple of people have referred to proper data balance across
> > striped volumes as synonymous with contiguous clusters/sectors on
> > an individual disk (or at least that's how I read it).
> >
> > My disks are perfectly balanced (or at least I desire no better
> > balancing)...that's not what I was going for. Someone also stated
> > that HD heads will be all over the place, so there would be no
> > advantage to contiguous disk usage. I disagree with this.
> >
> > Having data spread contiguously on disk will promote sequential
> > file reads instead of many non-sequential reads, with latency lags,
> > for the same non-contiguous file. Sure, there will be interrupts to
> > service many user requests, but wouldn't interrupts + non-sequential
> > file reads exacerbate the issue?
>
> Nope.
>
> Sequential reads are great for performance, but you can get great
> performance reading a file sequentially without having the file in
> contiguous blocks on disk.
>
> How? Simple: the record size is smaller than the block size. When you
> read the first record, the OS moves a multi-record block into memory.
> You won't need to hit the disk again until all the records in the
> block have been read. In comparison, with random I/O to a file, it's
> theoretically possible that you'd need to read a new block for every
> record, though caching and the iSeries single level store greatly
> minimize repeat transfers of a given block.
>
> Having the file's blocks contiguous on disk isn't going to help you.
> In fact, it would probably hurt performance. Since the iSeries is a
> multi-user server, in between the time your application requests the
> first block and then requests the next block, the disk heads will
> have moved.
>
> On Windows/Linux/UNIX servers using RAID, defragging helps, but not
> because the physical disk blocks have been made contiguous (a quick
> Google search turns up more info, but here are a couple of quotes):
> http://www.raxco.com/products/perfectdisk2k/whitepapers/pd_raid.pdf
>
> "In the case of RAID or SAN, it is indeed likely that a file that
> defragmentation software makes logically contiguous will be
> physically fragmented by the disk controller software. This is OK.
> The user benefits here in two ways. First, the file system is able
> to access the file in a single logical I/O taking the minimum elapsed
> time and using scant CPU and memory. Second, the file is in the best
> possible place for physical access according to the disk controller's
> intelligence."
>
> "The benefit of disk defragmentation comes not from locating physical
> clusters on the disk but from reducing file system overhead before
> any physical disk I/O occurs. Once this concept is understood, the
> need for defragmentation in large RAID or SAN environments becomes
> obvious."
>
> As far as I understand it, the iSeries, with its single level store,
> is not susceptible to file system fragmentation like the other OSes.
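Going back to the blocking point above, it's easy to put numbers on it.
Here's a quick Python sketch (toy record/block sizes and a one-block FIFO
cache, all made up; this models the bookkeeping, not real SLIC or DB2
internals) that counts how many block fetches the same file costs when
read sequentially versus randomly:

import random

# Hypothetical numbers -- this illustrates the blocking argument only,
# not actual iSeries storage management behavior.
RECORDS_PER_BLOCK = 64        # record size << block size
TOTAL_RECORDS = 100_000

def block_of(record_no: int) -> int:
    """Map a record number to the disk block that holds it."""
    return record_no // RECORDS_PER_BLOCK

def count_fetches(access_order, cache_size=1):
    """Count block fetches for a record access order with a tiny cache."""
    cache, fetches = [], 0
    for rec in access_order:
        blk = block_of(rec)
        if blk not in cache:
            fetches += 1
            cache.append(blk)
            if len(cache) > cache_size:
                cache.pop(0)          # evict the oldest block (FIFO)
    return fetches

sequential = range(TOTAL_RECORDS)
shuffled = random.sample(range(TOTAL_RECORDS), TOTAL_RECORDS)

print("sequential:", count_fetches(sequential))   # ~ TOTAL_RECORDS / 64
print("random:    ", count_fetches(shuffled))     # ~ one fetch per record

With 64 records per block, the sequential pass fetches roughly 1/64th as
many blocks as the random pass, and that ratio holds whether or not the
blocks happen to be physically adjacent on disk.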
> > I once had an IBM tech suggest using CPYLIB / RSTLIB to re-spread
> > large libraries across disks, because RSTLIB will rebuild each file
> > one at a time and keep the data contiguous at the disk level. In
> > fact, if you really wanted to put the time in, you could copy
> > files/tables one by one, with your largest (or most likely to be
> > scanned) tables first, so they are both contiguous and as close to
> > the outer edges of the disks as possible.
>
> Don't know who you were talking to, but they were wrong. The iSeries
> automatically spreads out files across all disks in the ASP. You'd
> never get all the parts of a large file on a single disk.
>
> HTH,
> Charles Wilt
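For what it's worth, that spreading is easy to picture with a toy model.
The sketch below is pure round-robin guesswork on my part (made-up extent
size and disk count; actual iSeries storage management decides placement
far more cleverly), but it shows why no single arm ends up holding a big
file:

# Conceptual illustration only -- hypothetical round-robin placement,
# not the real iSeries storage management algorithm.
EXTENT_MB = 1            # allocation unit, made-up size
DISKS_IN_ASP = 6         # made-up number of disk units in the ASP

def spread(file_mb: int) -> dict:
    """Return extents-per-disk for a file allocated round-robin."""
    placement = {d: 0 for d in range(DISKS_IN_ASP)}
    for extent in range(file_mb // EXTENT_MB):
        placement[extent % DISKS_IN_ASP] += 1
    return placement

print(spread(600))   # {0: 100, 1: 100, 2: 100, 3: 100, 4: 100, 5: 100}

Run it on a 600 MB file and each of the six units gets an equal share,
which is exactly the behavior Charles describes.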