Thanks. That is helpful. The application has at least four more years of life, so we might be able to get by with four more years on 7.4. There is no IBM i upgrade in the roadmap, since the applications are targeted to move to "modern" cloud-based applications, so buying a P10 in a few years might not be a wise investment if we can keep the P8 going.

On Tue, Aug 30, 2022 at 11:57 PM Rob Berendt <rob@xxxxxxxxx> wrote:

7.4 will still be in support in 4 years.
It might have hit its last TR by then. After all, 7.3 hasn't had a TR since 9/21, so I'm guessing 9/24 for the last TR for 7.4. It should still be getting cumes, as the 7.3 page says: "The next cumulative package is scheduled for 18 Nov 2022."


Oh, and a P8 won't run 7.5, a P7 won't run 7.4, and a P6 won't run 7.3, so a P9 probably won't run rNext after 7.5.

Some other historical items to put into your crystal ball:
7.2: Cumulative package C1084720 began shipping worldwide: 16 Apr 2021.
NO ADDITIONAL CUMULATIVE PACKAGES ARE PLANNED.
https://www.ibm.com/support/pages/node/6198560
6.1 hit end of support in 2015, 7.1 in 2018, and 7.2 in 2021, so if the pattern holds, 7.3 goes in 2024 and 7.4 in 2027. https://www.ibm.com/support/pages/node/668157
IBM promises a year of announcement before a release of IBM i hits EOS.

IBM has "Program support extension" for after EOS. However you will pay a
huge premium and it's likely that you still will not get vital stuff
needed. Prime example was 7.1 was under such support but couldn't get
newer ciphers needed to have SSL working to any current website. For a
bulk of the last holdouts on 7.1 this was the straw that broke that camel's
back.

Rob Berendt
--
IBM Certified System Administrator - IBM i 6.1
Group Dekko
Dept 1600
Mail to: 7310 Innovation Blvd, Suite 104
Ft. Wayne, IN 46818
Ship to: 7310 Innovation Blvd, Dock 9C
Ft. Wayne, IN 46818
http://www.dekko.com

-----Original Message-----
From: MIDRANGE-L <midrange-l-bounces@xxxxxxxxxxxxxxxxxx> On Behalf Of
Laurence Chiu
Sent: Tuesday, August 30, 2022 3:30 AM
To: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxxxxxxxx>
Subject: Re: [very long] Moving from physical FC adapter to VIOS (was: Re:
VTLs and backup time)


I wasn't aware of that. We are running an application with an anticipated life of four more years (year 2 of a 5-year planned use). I would have thought we would have moved from 7.3 to 7.5 (or even higher) in that time, but if the P8 doesn't support 7.5 (our DR box is a P9), then we might have to buy a P10 or see if we can live on 7.4 for the life of the application.
As noted later, we have no internal disks anymore, so it would not make sense to use internal disks for VIOS.

On Tue, Aug 30, 2022 at 12:07 AM Rob Berendt <rob@xxxxxxxxx> wrote:

Another option, since you're running a Power 8 and that won't run 7.5, is to order a P10, have your vendor set it up the right way, and just move to that.
You'll find that using VIOS to virtualize the Ethernet and the FC will significantly lower your costs, because fewer cards, expansion units, etc. will be needed.
Also, firing up a new LPAR (test?) will be easy with the virtualization.
The biggest debate will be internal or external disk for your VIOS. Based on a recent thread, a lot of it is personal preference on the part of the company you have setting this up.


-----Original Message-----
From: MIDRANGE-L <midrange-l-bounces@xxxxxxxxxxxxxxxxxx> On Behalf Of
Laurence Chiu
Sent: Saturday, August 27, 2022 6:19 PM
To: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxxxxxxxx>
Subject: Re: [very long] Moving from physical FC adapter to VIOS (was:
Re:
VTLs and backup time)


Thanks for your very detailed email. I probably wasn't clear about what I thought I could do, since I am not an IBM i specialist; my primary expertise is in z/OS.

I just manage the technical relationship with our IBM i support vendor, since I can understand the underlying infrastructure. For example, I did all the FC zoning config for the FC cards to connect to our FC director and then to the SAN and VTL, but the IBM i work was done by our support organisation and the FC work by another vendor.

But I will certainly forward your information to the support vendor
since a
move to VIOS without any outage is a good thing.

On Sun, Aug 28, 2022 at 12:23 AM Marc Rauzier <marc.rauzier@xxxxxxxxx>
wrote:

On 26/08/2022 at 22:04, Laurence Chiu wrote:
I keep thinking of doing that, since I have run out of slots to install any new NICs or FC adapters in my P8, and our IT support vendor keeps telling us to move to VIOS. But as this is the production box, I don't know how long it would take to completely redo the OS setup and go VIOS.

I believe there is a way (to be reviewed and validated by your support) to proceed live and without activating the DRP, but what do you mean by "completely redo the OS setup"? Do you mean reinstalling IBM i and restoring all your data on the IBM i partition? If so, and if I understand your P8 setup properly, there is no need to do that if you use NPIV (N_Port ID Virtualization).

Assuming the IBM i partition disks are V5030 volumes accessed through redundant physical FC adapters and a SAN switch, and assuming that dual VIOS is in place (that would be the tricky part of the project, depending on the P8 hardware configuration, as you need at least two disk controllers, either SAS or FC, one for each VIOS), here is an overview of the disk attachment migration steps.

Let's assume you have two physical FC adapters (pFC1 and pFC2) providing multipath access to the V5030 volumes through a SAN switch. You use one of the ports of pFC1 (with its pWWPN1 address) and one of the ports of pFC2 (with its pWWPN2 address). Both pWWPN1 and pWWPN2 are zoned to the V5030 FC port addresses, and both are known by the V5030 and assigned to the volumes.

First of all, from the HMC, duplicate the current IBM i partition
profile, to be used in case of backout.
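
If you prefer the HMC command line, saving the running configuration to a spare profile for backout could look roughly like this (the managed system, partition and profile names are placeholders; check the exact syntax against your HMC level):

    # Save the partition's current configuration as a new profile for backout
    mksyscfg -r prof -m SYS-P8 -o save -p IBMI_PROD -n IBMI_PROD_BACKOUT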

From the HMC, ensure that the VIOS and IBM i partitions are set for profile synchronisation. This ensures that all dynamic configuration changes are propagated to the profile currently in use.

The principle here is to move pFC1 to the first VIOS, check that
everything works fine, then move pFC2 to the second VIOS, check that
everything works fine.

From the HMC, create a pair of virtual FC adapters, server (on the VIOS side) and client (on the IBM i partition side), on each VIOS. Your IBM i partition will get two new FC adapters; let's name them vFC1 and vFC2. Each one will have newly assigned addresses: vWWPN1a, vWWPN1b, vWWPN2a and vWWPN2b. The "b" WWPNs are used in case of Live Partition Mobility (a live move from one server to another).
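
As a rough illustration only, adding the server side of such a pair dynamically from the HMC CLI might look like the following (partition names and slot numbers are placeholders; verify the attributes against your HMC level):

    # Add a virtual FC server adapter in slot 30 of the first VIOS, paired with
    # client slot 30 on the IBM i partition (illustrative names and slot numbers)
    chhwres -r virtualio -m SYS-P8 -o a -p VIOS1 --rsubtype fc \
      -s 30 -a "adapter_type=server,remote_lpar_name=IBMI_PROD,remote_slot_num=30"

The client side is created the same way on the IBM i partition with adapter_type=client; the HMC generates the vWWPN pairs for you.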

Prepare the zoning and the V5030 host definitions with the new addresses: vWWPN1a and vWWPN1b go in the same zone and host definition as pWWPN1; vWWPN2a and vWWPN2b go in the same zone and host definition as pWWPN2.
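
On a Brocade-type switch, for example, that preparation could be sketched like this (alias, zone and config names below are made up for illustration):

    # Add the new virtual WWPNs to an alias in the zone already used for pWWPN1
    alicreate "IBMI_VFC1", "c0:50:76:aa:bb:cc:00:02; c0:50:76:aa:bb:cc:00:03"
    zoneadd "Z_IBMI_V5030_A", "IBMI_VFC1"
    cfgsave
    cfgenable "PROD_CFG"

Repeat the equivalent for vWWPN2a/vWWPN2b on the pWWPN2 zone, and add the same WWPNs to the existing host objects on the V5030.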

From the HMC, with a dynamic LPAR operation, move the pFC1 adapter from the IBM i partition to the first VIOS. At this point there will be messages in the QSYSOPR message queue and in the system history log stating that some of the paths to the volumes are lost. You lose redundancy, but the partition is still running.
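
From the HMC CLI this is a pair of dynamic I/O operations; the managed system and partition names and the DRC index below are placeholders (lshwres shows the DRC index of the slot holding pFC1):

    # Find the DRC index of the slot holding pFC1
    lshwres -r io --rsubtype slot -m SYS-P8 -F drc_index,description,lpar_name
    # Remove the slot from the IBM i partition, then add it to the first VIOS
    chhwres -r io -m SYS-P8 -o r -p IBMI_PROD -l 21010021
    chhwres -r io -m SYS-P8 -o a -p VIOS1 -l 21010021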

The SAN switch port which is connected to the physical port must be set to allow NPIV; you can check that with the lsnports VIOS command. In the first VIOS, configure the new adapters (the virtual one which was created and the physical one which was moved from the IBM i partition) with the cfgdev command.
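
On the VIOS command line (padmin shell), that check and discovery looks like this:

    $ lsnports              # the fabric column must show 1 for NPIV-capable ports
    $ cfgdev                # discover the moved physical adapter and the new vfchost device
    $ lsdev -type adapter   # confirm the fcsX and vfchostX devices are Available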

Still in the first VIOS, map the virtual server FC adapter to that same physical port (the one which was used by the IBM i partition) with the vfcmap command. From the IBM i partition, use SST against the virtual FC adapter (at the IOA level) to check the status of the connection to the SAN.
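
For example, with illustrative device names:

    $ vfcmap -vadapter vfchost0 -fcp fcs0   # map the virtual server FC adapter to the physical port
    $ lsmap -vadapter vfchost0 -npiv        # the status should show LOGGED_IN once the client logs in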

On the SAN switch, update the zone configuration for vWWPN1a and vWWPN1b. By default, the light is not switched on for vWWPN1b, and therefore it is not visible on the SAN switch. If it is mandatory for this light to be on, you can switch it on from the HMC. From the IBM i partition, use SST against the virtual FC adapter (at the IOA level) to check the status of the connection to the V5030.

On the V5030, update the host definition for vWWPN1a and vWWPN1b. At this point, disk path redundancy is in place again: the new path through the VIOS replaces the old path through the physical adapter. Messages in QSYSOPR and the system history log will confirm it.
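
If you drive the V5030 from its Spectrum Virtualize CLI, the host update could be sketched as below (the host name and WWPNs are placeholders; on older code levels the flag is -hbawwpn rather than -fcwwpn):

    # Add the new virtual WWPNs to the existing host object, then verify the ports
    addhostport -fcwwpn C05076AABBCC0002:C05076AABBCC0003 IBMI_PROD
    lshost IBMI_PROD    # the new ports should show as active once the zoning is enabled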

From the HMC, switch vWWPN1b light off if it was switched on
previously.

From the HMC, configure the IPL source of the IBM i partition to use
the new virtual FC adapter in place of the physical adapter.

Repeat the steps for the second physical FC adapter, pFC2, and the second VIOS, from the dynamic LPAR operation through the V5030 reconfiguration. Finally, through SST, run the MULTIPATHRESETTER macro to reinitialize the multipath setup (this may speed up subsequent IPLs).

All the operations above are dynamic and do not require any outage. You only have a short loss of redundancy, so those steps might be scheduled in a time window with low impact in case of issue.

At a convenient time, you will have to power down the IBM i partition
and activate it to ensure that the IPL source is properly set on a
virtual FC adapter.

In case of issue, shutting down the VIOS for which the steps are in progress and dynamically assigning the physical FC adapter back to the IBM i partition should work, as the zoning and V5030 setup are still in place. If the IBM i partition loses all disk access (most probably it will raise an A6xx-0255 or similar SRC code), try to avoid shutting the IBM i partition down: when the disks come back, it will continue running from where it was paused. But if all your attempts fail and powering down is mandatory, you can activate the partition using the backup of the profile taken in the first step.

When everything is moved to the VIOS, cleaning operations can be done.

You will have to update the SAN zoning and the V5030 host definition to remove any reference to the pWWPN1 and pWWPN2 addresses.
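
Again only as a sketch, with the same placeholder names as above, that cleanup on a Brocade-type switch and on the V5030 might look like:

    # Switch: drop the physical WWPN alias from the zone and re-enable the config
    zoneremove "Z_IBMI_V5030_A", "IBMI_PFC1"
    cfgsave
    cfgenable "PROD_CFG"
    # V5030: remove the physical WWPN from the host object (-hbawwpn on older levels)
    rmhostport -fcwwpn 10000090FAAABB01 IBMI_PROD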

On the IBM i partition, through SST, you will have to remove all
resources related to physical FC adapters.

Note that, if you use the same physical FC adapter (probably on a
distinct port) for tape access, you will have to run similar steps at
the same time.

Those steps cover the disk (and tape) access only. The network is another story, which depends a lot on the current setup (redundancy with link aggregation or with virtual IP addresses) and the VLAN usage. However, I would set up the network first, because, in some circumstances in the past, I had to shut down both VIOS at the same time to resolve an SEA failover configuration issue.

I guess I do have one saving grace. Both production and DR are running off V5030 SANs with real-time replication, so I could take a small outage, flip over to DR and run there while we rebuild the prod box. Then, when prod is back, replicate back and start up prod again, so minimal downtime.


On Fri, Aug 26, 2022 at 11:26 PM Rob Berendt <rob@xxxxxxxxx> wrote:

We're doing the VIOS connection. Works great.




