Thanks for your very detailed email. I probably wasn't clear about what I
thought I could do, since I am not an IBM i specialist; my primary
expertise is in z/OS.

I just manage the technical relationship with our IBM i support vendor,
since I understand the underlying infrastructure. For example, I did all
the FC zoning config for the FC cards to connect to our FC director and
then to the SAN and VTL, but the IBM i work was done by our support
organisation and the FC work by another vendor.

But I will certainly forward your information to the support vendor since a
move to VIOS without any outage is a good thing.

On Sun, Aug 28, 2022 at 12:23 AM Marc Rauzier <marc.rauzier@xxxxxxxxx> wrote:

On 26/08/2022 at 22:04, Laurence Chiu wrote:
I keep thinking of doing that since I have run out of slots to install
any new NICs or FC adapters into my P8. And our IT support vendor keeps
telling us to move to VIOS. But as it is the production box, I don't know
how long it would take to completely redo the OS setup and go to VIOS.

I believe there is a way (to be reviewed and validated by your support)
to proceed live and without activating the DRP, but what do you mean by
"completely redo the OS setup"? Do you mean reinstalling IBM i and
restoring all your data on the IBM i partition? If so, and if I
understand your P8 setup properly, there is no need to do that if you use
NPIV (N_Port ID Virtualization).

Assuming the IBM i partition disks are V5030 volumes accessed through
redundant physical FC adapters and a SAN switch, and assuming that dual
VIOS is in place (that would be the tricky part of the project, depending
on the P8 hardware configuration, as you need at least two disk
controllers, either SAS or FC, one for each VIOS), here is an overview of
the disk attachment migration steps.

Let's assume you have two physical FC adapters (pFC1 and pFC2) providing
multipath to the V5030 volumes through a SAN switch. You use one of the
ports of pFC1 (with its pWWPN1 address) and one of the ports of pFC2
(with its pWWPN2 address). Both pWWPN1 and pWWPN2 addresses are zoned to
the V5030 FC port addresses and are known by the V5030 and assigned to
the volumes.

First of all, from the HMC, duplicate the current IBM i partition
profile, to be used in case of backout.
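
For illustration, this can also be done from the HMC command line by
saving the running configuration to a new profile. A minimal sketch;
system, partition and profile names are placeholders:

  mksyscfg -r prof -m MYP8 -o save -p IBMI_PROD -n BACKOUT_PRE_NPIV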

From the HMC, ensure that the VIOS and IBM i partitions are set for
profile synchronisation. This ensures that all dynamic configuration
changes are propagated to the profile currently in use.
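
A rough sketch of the equivalent HMC CLI setting, to be repeated for each
VIOS partition (system and partition names are placeholders):

  chsyscfg -r lpar -m MYP8 -i "name=IBMI_PROD,sync_curr_profile=1"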

The principle here is to move pFC1 to the first VIOS, check that
everything works fine, then move pFC2 to the second VIOS, check that
everything works fine.

From the HMC, create a pair of virtual server (on the VIOS side) and
client (on the IBM i partition side) FC adapters on each VIOS. Your IBM i
partition will get two new FC adapters. Let's name them vFC1 and vFC2.
Each one will have newly assigned addresses: vWWPN1a, vWWPN1b, vWWPN2a
and vWWPN2b. The "b" WWPNs are used in case of Live Partition Mobility
(a live move from one server to another).
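
As a hedged sketch, the HMC CLI form of those operations could look like
the following for the first VIOS (slot numbers and partition names are
placeholders; the same pair has to be created for the second VIOS):

  # server side of the pair, on the first VIOS
  chhwres -r virtualio -m MYP8 -o a -p VIOS1 --rsubtype fc -s 10 \
    -a "adapter_type=server,remote_lpar_name=IBMI_PROD,remote_slot_num=10"
  # client side of the pair, on the IBM i partition (WWPNs are generated)
  chhwres -r virtualio -m MYP8 -o a -p IBMI_PROD --rsubtype fc -s 10 \
    -a "adapter_type=client,remote_lpar_name=VIOS1,remote_slot_num=10"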

Prepare the zoning and V5030 host definitions with the new addresses:
vWWPN1a and vWWPN1b should be in the same zone and host definition as
pWWPN1; vWWPN2a and vWWPN2b should be in the same zone and host
definition as pWWPN2.
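
To collect the generated vWWPNs for that preparation, an HMC query along
these lines should list them (field names may vary slightly by HMC
level):

  lshwres -r virtualio --rsubtype fc --level lpar -m MYP8 \
    --filter "lpar_names=IBMI_PROD" -F lpar_name,slot_num,wwpns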

From the HMC, with a dynamic LPAR operation, move the pFC1 adapter from
the IBM i partition to the first VIOS. At this time, there will be
messages in the QSYSOPR message queue and in the system history log
stating that some of the paths to the volumes are lost. You lose
redundancy, but the partition is still running.
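
A sketch of that dynamic LPAR move as two HMC CLI operations (the DRC
index 21010021 is a placeholder; lshwres -r io --rsubtype slot lists the
real ones):

  # remove the physical FC slot from the IBM i partition...
  chhwres -r io -m MYP8 -o r -p IBMI_PROD -l 21010021
  # ...and add it to the first VIOS
  chhwres -r io -m MYP8 -o a -p VIOS1 -l 21010021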

The SAN switch port which is connected to the physical port must be set
to allow NPIV. You can check that with the lsnports VIOS command. In the
first VIOS, configure the new adapters (the virtual one which was created
and the physical one which was moved from the IBM i partition) with the
cfgdev command.
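
On the first VIOS, that check and discovery would look roughly like this:

  lsnports    # the fabric column must be 1 for the port to support NPIV
  cfgdev      # discover the newly assigned virtual and physical adapters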

Still in the first VIOS, map the virtual server FC adapter to the same
physical port (the one which was used by the IBM i partition) with the
vfcmap command. From the IBM i partition, use SST against the virtual FC
adapter (at the IOA level) to check the status of the connection to the
SAN.
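
A sketch of the mapping and its verification on the VIOS, where vfchost0
and fcs0 are placeholders for the adapters discovered with cfgdev:

  vfcmap -vadapter vfchost0 -fcp fcs0
  lsmap -npiv -vadapter vfchost0   # status shows LOGGED_IN once the SAN side is ready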

On the SAN switch, update the zone configuration for vWWPN1a and
vWWPN1b. By default, vWWPN1b is not logged in to the fabric (its light is
not switched on) and is therefore not visible on the SAN switch. If it is
mandatory for this light to be on, you can switch it on from the HMC.
From the IBM i partition, use SST against the virtual FC adapter (at the
IOA level) to check the status of the connection to the V5030.
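
For illustration, on a Brocade switch the zoning update could look like
the sketch below, and the HMC chnportlogin command is one way to switch
the inactive "b" light on (zone, config and WWPN values are
placeholders):

  zoneadd "IBMI_PROD_V5030", "c0:50:76:aa:bb:cc:00:02; c0:50:76:aa:bb:cc:00:03"
  cfgsave
  cfgenable "PROD_FABRIC_CFG"
  # optional, from the HMC: log in the inactive WWPNs
  chnportlogin -m MYP8 -o login -p IBMI_PROD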

On the V5030, update the host definition with vWWPN1a and vWWPN1b. At
this time, disk path redundancy is in place again: the new path through
the VIOS replaces the old path through the physical adapter. Messages in
QSYSOPR and the system history log will confirm this.
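
On the V5030 CLI, adding the new ports to the existing host object might
look like this (host name and WWPNs are placeholders; flag names can vary
by code level):

  addhostport -hbawwpn C05076AABBCC0002:C05076AABBCC0003 IBMI_PROD
  lshost IBMI_PROD   # the new ports should show as active once zoned and logged in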

From the HMC, switch the vWWPN1b light off if it was switched on previously.

From the HMC, configure the IPL source of the IBM i partition to use
the new virtual FC adapter in place of the physical adapter.
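
As a sketch only, the tagged I/O of the partition profile can also be
changed from the HMC CLI; the attribute names below are from memory and
must be checked against your HMC level (the slot numbers are placeholders
for the virtual FC client slots):

  chsyscfg -r prof -m MYP8 \
    -i "name=PROD_PROFILE,lpar_name=IBMI_PROD,load_source_slot=10,alt_restart_device_slot=11"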

Repeat the steps for the second physical FC adapter pFC2 and the second
VIOS, from the dynamic LPAR operation to the V5030 reconfiguration.
Finally, through SST, run the MULTIPATHRESETTER macro to reinitialize the
multipath setup (this may speed up subsequent IPLs).
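
For reference, that macro is reached through the SST Advanced Analysis
screens; a rough outline only, as the exact menu wording and macro
options vary by release and should be checked with your support:

  STRSST -> Start a service tool -> Display/Alter/Dump
         -> Licensed Internal Code data -> Advanced analysis -> MULTIPATHRESETTER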

All the operations above are dynamic and do not require any outage. You
only have a short loss of redundancy, so those steps might be scheduled
in a time window with low impact in case of issue.

At a convenient time, you will have to power down the IBM i partition
and activate it to ensure that the IPL source is properly set on a
virtual FC adapter.

In case of issue, shutting down the VIOS for which the steps are in
progress and dynamically assigning the physical FC adapter back to the
IBM i partition should work, as the zoning and V5030 setup are still in
place. If the IBM i partition loses all disk access (most probably, it
will raise an A6xx-0255 or similar SRC), you should try to avoid shutting
down the IBM i partition: when the disks come back, it will continue
running from where it was paused. But if all your attempts fail and
powering down is mandatory, you can activate the partition using the
backup of the profile which was taken in the first step.

When everything is moved to the VIOS, cleanup operations can be done.

You will have to update SAN zoning and V5030 host definition to remove
any reference to pWWPN1 and pWWPN2 addresses.
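
A matching cleanup sketch, again with placeholder names (Brocade zoning
plus V5030 host ports; the pWWPN values are those of the removed physical
adapters):

  zoneremove "IBMI_PROD_V5030", "<pWWPN1>; <pWWPN2>"
  cfgsave
  cfgenable "PROD_FABRIC_CFG"
  rmhostport -hbawwpn <pWWPN1> IBMI_PROD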

On the IBM i partition, through SST, you will have to remove all
resources related to physical FC adapters.

Note that, if you use the same physical FC adapter (probably on a
distinct port) for tape access, you will have to run similar steps at
the same time.

Those steps are for disk (and tape) access only. The network is another
story, which depends a lot on the current setup (redundancy with link
aggregation or with virtual IP addresses) and the VLAN usage. However, I
would set up the network first, because, in the past, in some
circumstances, I had to shut down both VIOS at the same time to resolve
an SEA failover configuration issue.
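
If the network ends up on a Shared Ethernet Adapter with failover, the
VIOS-side command is mkvdev -sea; a minimal sketch, assuming ent0 is the
physical adapter, ent4 the trunk virtual adapter and ent5 the control
channel (all placeholders):

  mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1 \
    -attr ha_mode=auto ctl_chan=ent5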

I guess I do have one saving grace. Both production and DR are running
off V5030 SANs with real-time replication, so I could take a small
outage, flip over to DR and run there while we rebuild the prod box.
Then, when prod is back, replicate back and start prod up again, so
minimal downtime.


On Fri, Aug 26, 2022 at 11:26 PM Rob Berendt <rob@xxxxxxxxx> wrote:

We're doing the VIOS connection. Works great.

Rob Berendt
--
IBM Certified System Administrator - IBM i 6.1
Group Dekko
Dept 1600
Mail to: 7310 Innovation Blvd, Suite 104
Ft. Wayne, IN 46818
Ship to: 7310 Innovation Blvd, Dock 9C
Ft. Wayne, IN 46818
http://www.dekko.com

