Rob,
I think it's worth pointing out that you are using two different
virtualization techniques together, VIOS and IBM i hosting, and that
each has a very different role.
IBM i is only hosting storage in this case, which is much better than
VIOS for internal storage. That's where the NWSDs come in.
VIOS, in your case, is only virtualizing Ethernet and the tape library
(in your case a VTL).
That's important to note: since the VIOS configuration in your case is
quite simple, DR can be as simple as a rebuild. If someone (like, for
instance, Agile's equipment) has VIOS hosting not only Ethernet and
peripherals but also storage from a SAN, then the configuration is a bit
more robust (read: complex), so a DR plan of rebuild may not be
appropriate in that case.
Learning the AIX commands to back up and restore VIOS is a bit
different from IBM i, but once you catch on, it's not that bad. I will
admit the learning curve for IBM i folks might be a bit steeper than
for others because IBM i is so dang easy.
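Just as a sketch (the file names here are placeholders, and the exact
options should be checked against your VIOS level), a basic backup from
the padmin restricted shell looks something like:

   backupios -file /home/padmin/viosbkup.mksysb -mksysb
   viosbr -backup -file viosconfig

The first command saves a mksysb image of the VIOS itself; the second
saves just the virtual device configuration, which fits nicely with a
rebuild-style DR plan.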
Jim Oberholtzer
Agile Technology Architects
On Feb 4, 2020, at 6:40 AM, Rob Berendt <rob@xxxxxxxxx> wrote:

I don't know the size of your system nor your HA plans. However I will
say this...
We have a system with only 4 drives in the system unit. One pair is
mirrored for one VIOS lpar; another pair is mirrored for the second
VIOS lpar.
It has one lpar of IBM i with 23TB of internal SSDs in expansion units.
That lpar is a hosting lpar only and hosts the disk for four other
lpars of IBM i and one of AIX. Using the hosting lpar concept allows
all guests to use all the disk 'arms' without having to throw money at
numerous disk controllers per lpar.
Last week I had IBM connect and do a performance review of the main
guest lpar, which runs INFOR LX for our ERP. They pointed out a few
issues, but in general there were only two, very minor, issues with
disk. One was that there was too much temporary space, which can impede
the use of the SQL plan cache. They suggested more frequent IPLs. I
plan on using the DB2 services for analyzing temporary space instead.
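As a sketch of that approach (verify the column names against your
release), the QSYS2.SYSTMPSTG service shows which temporary storage
buckets are the big consumers:

   SELECT GLOBAL_BUCKET_NAME, BUCKET_CURRENT_SIZE, BUCKET_PEAK_SIZE
     FROM QSYS2.SYSTMPSTG
     ORDER BY BUCKET_CURRENT_SIZE DESC

That gives you the data to decide whether an IPL is actually warranted
instead of just IPLing on a schedule.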
The other disk issue needs a little background information for those
unfamiliar with IBM i hosting IBM i disk. On the hosting lpar you
configure one or more NWSDs (Network Server Descriptions) per lpar. Each
NWSD contains 1-16 network storage spaces. A network storage space is
stored in the IFS and is automatically spread across many disks. This
is how each lpar shares all the disk arms. The guest lpar they
analyzed has four NWSDs going to it. One NWSD is only for devices
other than disk. The other three NWSDs host 10, 11, and 9 storage
spaces, respectively. Using multiple NWSDs and multiple storage spaces can help
performance. The guest thinks of the storage spaces as disks and you
actually go into disk stuff in SST to add them. Thus it thinks of
them as arms and adjusts performance accordingly. The guest thinks of
the NWSDs as individual disk controllers. Sometimes you don't create
multiples initially but add them as the system grows. Initially most of
our setup came about through growth, but after a few system upgrades
we've gotten a better handle on it and now preplan for multiple storage
spaces spread out over multiple NWSDs.
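As a rough sketch of the hosting-side CL (the names, resource, and size
here are made up, so treat this as illustrative only):

   CRTNWSD    NWSD(GUEST1A) RSRCNAME(CTL01) TYPE(*GUEST *OPSYS)
              PARTITION(GUEST1) ONLINE(*NO)
   CRTNWSSTG  NWSSTG(G1DISK01) NWSSIZE(716800) FORMAT(*OPEN)
   ADDNWSSTGL NWSSTG(G1DISK01) NWSD(GUEST1A)
   VRYCFG     CFGOBJ(GUEST1A) CFGTYPE(*NWS) STATUS(*ON)

Each additional CRTNWSSTG/ADDNWSSTGL pair shows up in the guest's SST
as another disk unit you can add to the ASP.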
Even so, IBM has determined that one of those NWSDs (which they saw
as a disk controller) is busier than the others and would benefit from
running the TRCASPBAL and STRASPBAL TYPE(*USAGE) pair.
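For reference, that pair is run as trace first, then balance; the ASP
number and time limits below are just examples:

   TRCASPBAL SET(*ON) ASP(1) TIMLMT(*NOMAX)
   (let it collect statistics over a representative workload)
   TRCASPBAL SET(*OFF) ASP(1)
   STRASPBAL TYPE(*USAGE) ASP(1) TIMLMT(*NOMAX)

The trace identifies hot and cold data, and STRASPBAL TYPE(*USAGE) then
redistributes it across the units in the ASP.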
So, in summary, we use internal disk and host 7.8TB to the guest IBM i
they analyzed, and it's running pretty fat, dumb and happy. Slinging a
pot of money at an external SAN may not be a magic cure-all for
performance. There are a lot of other arguments for using SAN outside
of performance.
Rob Berendt
--
IBM Certified System Administrator - IBM i 6.1
Group Dekko
Dept 1600
Mail to: 2505 Dekko Drive
Garrett, IN 46738
Ship to: Dock 108
6928N 400E
Kendallville, IN 46755
http://www.dekko.com