MIDRANGE dot COM Mailing List Archive




RE: 70GB load source and 5 other drives




rob@xxxxxxxxx wrote on Fri, 25 Apr 2014 12:49:23 GMT:

> You CAN do just 1. I have. Those lpars are also very slow.
> Where do you see that your bottleneck is the lack of LUNs in
> performance tools? I haven't a clue. I'm just standing on
> the shoulders of giants.


Threads are created in DB tasks, storage management tasks,
journal tasks, etc., and the thread count is based on the number
of disk units (LUNs, HDDs, SSDs, etc.). If you have only 1 disk
unit, there is only 1 thread, which results in poor performance.

You could get away with 2 units, but 2-5 units do not allow
enough threads to start for decent save and restore performance.

Six (6) or more units provide a good number of threads for all
the tasks that are multi-threaded based on the number of disk
units.
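To make the thread-count point concrete, here is a rough conceptual sketch in plain Python. It is not IBM i internals; the one-worker-per-disk-unit model and the timings are assumptions purely for illustration of why save/restore tasks that spawn one thread per configured unit speed up as units are added:

```python
# Conceptual sketch only: models tasks whose worker-thread count is
# derived from the number of configured disk units. All numbers are
# made up for illustration.
import time
from concurrent.futures import ThreadPoolExecutor


def copy_chunk(chunk_id):
    # Stand-in for streaming one slice of data off one disk arm.
    time.sleep(0.01)
    return chunk_id


def simulated_save(num_disk_units, total_chunks=60):
    # One worker thread per disk unit: with 1 unit the work is fully
    # serialized; with 6 units the chunks overlap.
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=num_disk_units) as pool:
        list(pool.map(copy_chunk, range(total_chunks)))
    return time.perf_counter() - start


if __name__ == "__main__":
    for units in (1, 2, 6):
        print(f"{units} unit(s): {simulated_save(units):.2f}s")
```

Running the sketch shows the simulated save time dropping sharply going from 1 to 6 "units", which is the shape of the effect described above.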

I will add the caveat that this applies when you aren't starting
from an environment with "lots" of small (35G or less) HDDs.
What is the definition of "lots"? As we always say for
performance, "it depends". :)

Rob, specific to your question about how to set up your new
hosted LPAR: do you need really good save/restore speeds? If
so, then an NWSSTG of at least 75302 MB on the hosting LPAR will
cause the client LPAR to see a device of a little over 70G. The
remaining NWSSTGs can be any size you want.
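As a quick sanity check on that figure, assuming the NWSSTG size is specified in megabytes and "70G" means binary gigabytes (1 GB = 1024 MB; the units are my assumption, not stated in the post):

```python
# Convert the suggested NWSSTG size to GB (assumes MB input and
# binary GB, i.e. 1 GB = 1024 MB).
nwsstg_mb = 75302
client_gb = nwsstg_mb / 1024
print(f"{nwsstg_mb} MB is about {client_gb:.1f} GB")
```

That works out to roughly 73.5 GB, consistent with the client LPAR seeing "a little over 70G".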

I don't have time to test, but I'm thinking it may not matter
whether you use a large load source or many smaller units when
all the storage is hosted, simply because the NWSSTGs are spread
across all the backing disks and should not create hot spots.
Yes, in the client LPAR the load source will be busier, but it
really shouldn't matter.








This mailing list archive is Copyright 1997-2014 by MIDRANGE dot COM and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available here. If you have questions about this, please contact