Sue,

Thanks for your response. The details of the disk configuration are:

2 - 4326 35GB 15K drives attached to the base 570B I/O controller in the CEC (mirrored load source)
2 - 0595 PCI-X Tower Units (drawers)
2 - 2780 PCI-X Ultra4 I/O controllers
18 - 4328 141GB 15K drives

The 18 141GB drives are divided into mirrored sets of nine drives, with each set having its own controller and drawer (i.e. mirrored at the tower level). The total disk configuration capacity is 1305 GB.

The initial reason for reviewing the configuration was the uneven "% Used" reported by WRKDSKSTS for the load source drives versus the other drives: WRKDSKSTS was reporting 92% for the load source drives and 73% for the rest of the drives. After performing the STRASPBAL *MOVDTA, the "% Used" for the load source is 15%. There have been no performance issues as of yet, but we are in the process of an ERP upgrade, so the system is not in full production mode either. I'm just looking to head off any issues before they become problems.

Questions/comments in-line.

On 2/7/06, Sue Baker <smbaker@xxxxxxxxxx> wrote:
> The load source unit stores much more than read-only IPL specific
> data and needs to fully participate in the system ASP.

Can you elaborate on this statement or point me to a reference where I can learn more?

> You didn't specify whether you have only 2 35G drives or 4 behind
> the base DASD controller. You also didn't specify whether you
> have the 5709 or 5726 write cache plus RAID enabler daughter
> card. If you don't have this card, you really should consider
> getting one to improve the performance of mirroring for your
> drives. You can also look into the newer replacement features
> that were announced 1/31 that offer similar write cache, RAID
> capability, and performance.

Working with the BP and IBM on developing the DASD configuration for this system, it was decided that, since the base I/O controller would only be supporting the two mirrored load source drives and the optical drive, the performance of the base I/O controller would not be an issue. Plus, since we are mirroring, the RAID capability was not a requirement.

> What we frequently see is that the larger drives get most of the IO
> workload and the smaller drives next to none because the disk
> requests "follow the GBs". And in your circumstances, this would be
> the desired behavior.

This is what I expected to see also, but as I stated above, the small load source drives were at 92% whereas the large drives were only at 73%.

> To get back to your questions, if you attempt to utilize
> something like independent or user ASPs to isolate load source, I
> think you'll quickly find that 2 or 4 drives just isn't enough to
> support all the disk requests that are needed even though
> there's more than enough gigabytes available. If you attempt to
> run with the load source and its mirror in an *ENCALOC state, I'm
> pretty sure you'll end up with all sorts of odd errors from i5/OS.

No, I don't want to use separate ASPs due to the number of disk arms I currently have available. So, from what you've stated, I should forget about trying to remove the load source from the performance mix and go ahead and run STRASPBAL *USAGE (a sketch of the command sequence is below my sign-off). The system should balance the data across the drives based on the disk arm activity, which should provide the best performance for my configuration. Correct?

Thanks again for your assistance.

Kind regards,
Brian
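
For reference, a minimal sketch of the *USAGE balance sequence discussed above, assuming ASP 1 and a four-hour trace window; the specific values are illustrative only and would need adjusting for the actual system. TYPE(*USAGE) balancing works from the arm-usage statistics gathered beforehand by TRCASPBAL:

    TRCASPBAL SET(*ON) ASP(1) TIMLMT(240)         /* Collect arm-usage statistics for 4 hours of    */
                                                   /* representative workload (example window only)  */
    TRCASPBAL SET(*OFF) ASP(1)                     /* Or let the time limit end the trace            */
    STRASPBAL TYPE(*USAGE) ASP(1) TIMLMT(*NOMAX)   /* Redistribute data across units based on the    */
                                                   /* collected usage statistics                     */
    CHKASPBAL                                      /* Check whether balance activity is still running */

If the balance needs to be interrupted, ENDASPBAL can stop it and STRASPBAL can resume it later.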