Brian, hope this additional information helps.

Brian wrote on Wed, 08 Feb 2006 17:19:54 GMT:

> Sue,
> 
> Thanks for your response. The details of the disk
> configuration are: 
> 
>   2 - 4326 35GB 15K drives attached to the base 570B I/O
>   controller in the 
> CEC (Mirrored Load Source)
>   2 - 0595 PCI-X Tower Units (Drawers)
>   2 - 2780 PCI-X Ultra4 I/O Controller
> 18- 4328 141GB 15K drives
> 
> The 18 141GB drives are divided into mirrored sets of nine
> drives with each set having its own controller and drawer
> (i.e. mirrored at the tower level). The total disk
> configuration capacity is 1305 GB. The initial reason for 
> reviewing the configuration was due to the uneven "% Used"
> reported by WRKDSKSTS for the load source drives versus the
> other drives. WRKDSKSTS was reporting 92% for the load source
> drives and 73% for the rest of the drives. After performing
> the STRASPBAL *MOVDTA the "% Used" for the load source is 15%.
> 

This is one of the anomalies we've seen when either drive sizes 
are skipped or there are four or more drive sizes.  It can also 
result if only the load source mirrored pair was configured when 
the system was first loaded.  All of the Licensed Internal Code, 
QSYS, and the IBM products end up on the load source and its 
mirrored mate rather than spread across all the available disk 
units.  STRASPBAL usually does a pretty good job of fixing this 
and moving data across all available disks.
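For reference, a capacity rebalance can be started, checked on, 
and stopped from a command line roughly like this (the ASP number 
and time limit here are examples, not values from your system):

  STRASPBAL TYPE(*CAPACITY) ASP(1) TIMLMT(*NOMAX)
  CHKASPBAL ASP(1)   /* check progress of the balance   */
  ENDASPBAL ASP(1)   /* end the balance early if needed */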

> There have been no performance issues as of yet but we are in
> the process of an ERP upgrade so the system is not in full
> production mode either. I'm just looking to head off any
> issues before they become problems. 
[snip]
> 
> Can you elaborate on this statement or point me to a reference
> where I can learn more?
> 

There are many things that get placed on the load source unit.  
I'm not aware of any documentation that contains a complete list.  
Some of the items are the PAL, SAL, etc. entries used for problem 
logging and determination; look at the settings for the various 
logs in SST and you'll get an idea of what these items are.  
Also, work control block tables used for job structures have the 
load source as their preferred location.  And there are other 
things that escape me right now.

[snip]
> 
> Working with the BP and IBM on developing the DASD
> configuration for this system it was decided that since the
> base I/O controller would only be supporting the 2 mirrored
> load source drives and the optical drive that performance of
> the base I/O controller would not be an issue. Plus, since we 
> are mirroring, the RAID capability was not a requirement.
> 

Unfortunately, IBM really missed the boat when naming the 5709 
and 5726 features.  It's not only a "RAID enabler", but most 
importantly a write cache.  Any time you write to your 35G drives 
on the base disk controller, the physical disk op must complete 
before more writes or reads can occur.  I'm glad you're not 
seeing any performance issues, but this could show up in the 
future.  

[snip]
> 
> This is what I expected to see also but as I stated above, the
> small load source drives were at 92% whereas the large drives
> were only at 73%. 
> 
See my earlier comment and question about how you loaded your 
system.  But if we stick with the theory that disk ops follow 
the GBs: at 92% full, the 35GB drives have only 32GB or so of 
data that you'll be going after.  This is not a bad thing!  On 
the other hand, the 141GB drives have about 103GB each in use, 
so those 141GB drives will carry the bulk of your disk workload.  
Again, this is not bad.
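Putting rough per-drive numbers on that, from the percentages 
WRKDSKSTS reported:

  35 GB  x 0.92 =  ~32 GB in use  (load source drives)
  141 GB x 0.73 = ~103 GB in use  (large drives)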

> 
[snip]
> No, I don't want to use separate ASPs due to the number of
> disk arms I currently have available. So, from what you've
> stated, I should forget about trying to remove the load source
> from the performance mix and go ahead and run STRASPBAL
> *USAGE. The system should balance the data across the drives 
> based on the disk arm activity which should provide the best
> performance for my configuration. Correct?
> 

While you could spend time running STRASPBAL *USAGE after running 
TRCASPBAL, I don't know how effective this will be in the long 
run.  The ASPBAL commands were really designed to help people who 
use a user ASP with a combination of high-performing and 
low-performing disk units (maybe even compressed disk) keep 
regularly accessed data on the high-performing disks.  While some 
people claim to see very good results, I've yet to see this when 
analyzing before-and-after performance data.
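If you do decide to try it anyway, the sequence looks roughly 
like this (the ASP number and times are examples only):

  TRCASPBAL SET(*ON) ASP(1) TIMLMT(120)         /* trace disk activity for 2 hours     */
  STRASPBAL TYPE(*USAGE) ASP(1) TIMLMT(*NOMAX)  /* rebalance based on the traced usage */
  TRCASPBAL SET(*CLEAR) ASP(1)                  /* clear the trace data afterwards     */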

Also, there's a cost associated with trying to micromanage disk 
in this fashion.  Not an actual check you or your company writes, 
but it's there in the time it takes to monitor the commands and 
in the potential impact on work you need to run while these 
commands are operating.  If I had a choice, I'd go for adding the 
write cache and forget about STRASPBAL.  I realize this does 
involve spending $s that may not be easy to justify to the 
business.

In addition to adding the write cache, I'd consider, when you 
need to add more disk storage, swapping those 35GB drives for 
141GB drives rather than filling in the holes on the 0595s.  
i5/OS does a really good job (most of the time) of keeping the 
disks evenly full, and this in turn usually keeps performance 
good.  There are always exceptions, but generally this is very 
true.

Hope this helps and feel free to ask follow up questions.

