Roberto:

We too tend to configure boot from SAN, and that no doubt helps the DR aspect of VIOS. Scripting your configuration is a great thing too, but I'll bet it took you some time to get it all correct. The HMC command line is not the easiest until you understand the name/value pair setup, and then you have to get past all the options that either don't matter for the configuration you are doing or are optional.
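
For anyone following along, the name/value pair style looks roughly like this from the HMC restricted shell (a sketch only; the managed system name and every attribute value below are placeholders, and the exact attribute list varies by HMC and firmware level):

  lssyscfg -r lpar -m MANAGED_SYS -F name,state,lpar_env
  mksyscfg -r lpar -m MANAGED_SYS -i "name=PROD1,profile_name=default,lpar_env=os400,min_mem=8192,desired_mem=16384,max_mem=32768,proc_mode=shared,min_proc_units=0.5,desired_proc_units=1.0,max_proc_units=2.0,min_procs=1,desired_procs=2,max_procs=4,sharing_mode=uncap,uncap_weight=128"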

Even so, most auditors will require that you show them how to recover, whichever technique you use. They love tape. Tape can also (if configured properly) air-gap the backups from the system to avoid unpleasant situations with ransomware. Flash on the SAN won't quite do that.

--
Jim Oberholtzer
Agile Technology Architects

-----Original Message-----
From: MIDRANGE-L <midrange-l-bounces@xxxxxxxxxxxxxxxxxx> On Behalf Of Roberto José Etcheverry Romero
Sent: Tuesday, February 4, 2020 9:37 AM
To: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxxxxxxxx>
Subject: Re: Multiple LPARs PowerVM and internal drives

Jim,

When you get to vFC on VIOS, the configuration is as simple as the Ethernet/VTL setup, or very close. I document my installs in such a way that I can re-create them with a simple script (that is also why I hate the new HMC GUI; I had to script the whole PowerVM LPAR creation process just to keep my device mappings matched between VIOSes).
Besides, if you have the rootvg living on the SAN as well, you can take snapshots of that for backup... (I've gone diskless on all the P9s I sell lately; it's not that hard to install to SAN, and I haven't had a case where having the rootvg but not the data disks would have helped me).
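
For reference, the vFC part of one of those scripts boils down to a couple of padmin commands per client adapter (a sketch; vfchost0 and fcs0 are just placeholder device names from my notes):

  $ vfcmap -vadapter vfchost0 -fcp fcs0
  $ lsmap -all -npiv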

Roberto

On Tue, Feb 4, 2020 at 10:30 AM Jim Oberholtzer <midrangel@xxxxxxxxxxxxxxxxx> wrote:

Rob,

I think it's worth pointing out that you are using two different
virtualization techniques together, VIOS and IBM i hosting, and that
each has a very different role.

IBM i is only hosting storage in this case, and it is much better than
VIOS for internal storage. That's where the NWSDs come in.

VIOS, in your case, is only virtualizing Ethernet and the tape library
(in your case a VTL).
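
For what it's worth, the Ethernet side of that is typically a single Shared Ethernet Adapter definition per VIOS, something like the following from the padmin shell (a sketch; the ent device numbers and default VLAN id are placeholders that depend on your adapter and slot layout):

  $ mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1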

That's important to note: since the VIOS configuration in your case is
quite simple, DR can be as simple as a rebuild. If someone (like, for
instance, Agile's equipment) has VIOS hosting not only Ethernet and
peripherals but also storage from a SAN, then the configuration is a bit more robust (read:
complex), and therefore a DR plan of "just rebuild it" may not be
appropriate in that case.

Learning the AIX-style commands to back up and restore VIOS is a bit
different from IBM i, but once you catch on, it's not that bad. I will
admit the learning curve for IBM i folks might be a bit steeper than
for others, because IBM i is so dang easy.
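
For anyone who hasn't done it yet, the basic backup is a couple of commands from the padmin shell (a sketch; the file names are placeholders, and the restore side is typically installios from the HMC or a NIM server):

  $ backupios -file /home/padmin/vios1.mksysb -mksysb
  $ viosbr -backup -file vios1_config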

Jim Oberholtzer
Agile Technology Architects



On Feb 4, 2020, at 6:40 AM, Rob Berendt <rob@xxxxxxxxx> wrote:

I don't know the size of your system or your HA plans. However, I will
say this...
We have a system with only 4 drives in the system unit: one pair
mirrored for one VIOS lpar, and another pair mirrored for the second VIOS lpar.
It has one lpar of IBM i with 23TB of internal SSDs in expansion units.
That lpar is a hosting lpar only and hosts the disk for four other
lpars of IBM i and one of AIX. Using the hosting lpar concept allows
all guests to use all the disk 'arms' without having to throw money at
numerous disk controllers per lpar.

Last week I had IBM connect and do a performance analysis on the main
guest lpar, which runs INFOR LX for our ERP. They pointed out a few
issues, but in general there were only two, very minor, issues with
disk. One was that there was too much temporary space, which can
impede the use of the SQL plan cache. They suggested more frequent
IPLs; I plan on using the Db2 services for analyzing temporary space instead.
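
For the curious, the Db2 service I have in mind is the QSYS2.SYSTMPSTG view of temporary storage buckets. Something along these lines from ACS Run SQL Scripts shows where the temporary storage is going (the ORDER BY column name is from memory, so check the documentation for your release):

  SELECT *
    FROM QSYS2.SYSTMPSTG
   ORDER BY BUCKET_CURRENT_SIZE DESC
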
The other disk issue needs a little background information for those
unfamiliar with IBM i hosting IBM i disk. On the hosting lpar you
configure one or more NWSDs (Network Server Descriptions) per guest
lpar. Each NWSD contains 1-16 network storage spaces. A network
storage space is stored in the IFS and is automatically spread across
many disks; this is how each lpar shares all the disk arms. The guest
lpar they analyzed has four NWSDs going to it. One NWSD is only for
devices other than disk; the other three NWSDs host 10, 11, and 9
storage spaces respectively. Using multiple NWSDs and multiple storage
spaces can help performance. The guest sees the storage spaces as
disks, and you actually go into the disk configuration functions in
SST to add them, so it treats them as arms and adjusts performance
accordingly. The guest sees each NWSD as an individual disk
controller. Sometimes you don't create multiples initially but add
them as the system grows. Initially most of our layout was due to
growth, but after a few system upgrades we've gotten a better handle
on it and now preplan for multiple storage spaces spread out over multiple NWSDs.
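
On the hosting lpar, adding one of those storage spaces is only a few CL commands (a rough sketch with made-up names and a made-up size in MB; it assumes the guest NWSD already exists from an earlier CRTNWSD ... TYPE(*GUEST) and is varied off while the link is added):

  CRTNWSSTG NWSSTG(GUEST1D10) NWSSIZE(500000) FORMAT(*OPEN)
  ADDNWSSTGL NWSSTG(GUEST1D10) NWSD(GUEST1S3)
  VRYCFG CFGOBJ(GUEST1S3) CFGTYPE(*NWS) STATUS(*ON)

On the guest side, the new storage space then shows up as a non-configured disk unit in SST, where you add it to the ASP.
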
Even so, IBM has determined that one of those NWSDs (which they saw
as a disk controller) is busier than the others and would benefit from
running the TRCASPBAL and STRASPBAL TYPE(*USAGE) pair.
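
That pair is basically the following (a sketch; the ASP number and time limits are whatever fits your window). The trace collects usage statistics for a while, then the balance redistributes data across the units based on those statistics:

  TRCASPBAL SET(*ON) ASP(1) TIMLMT(1440)
  TRCASPBAL SET(*OFF) ASP(1)
  STRASPBAL TYPE(*USAGE) ASP(1) TIMLMT(*NOMAX)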

So, in summary, we use internal disk and host 7.8TB to the guest that
IBM analyzed, and it's running pretty fat, dumb, and happy. Slinging a
pot of money at an external SAN may not be a magic cure-all for performance.
There are a lot of other arguments for using a SAN outside of performance.


Rob Berendt
--
IBM Certified System Administrator - IBM i 6.1 Group Dekko Dept 1600
Mail to: 2505 Dekko Drive
Garrett, IN 46738
Ship to: Dock 108
6928N 400E
Kendallville, IN 46755
http://www.dekko.com
