> At the moment since we are not using VIOS, we replicate all the LUNs
> from production to DR including the IPL source volumes.

Understood, this is a Full System Replication. OK for that, but not relevant to using VIOS or not.

> On IPL of the DR instance we have scripts or configurations, I think, that
> look at the serial number of the Power server it is running on and then set
> either a primary or secondary network configuration to account for the
> different IP addresses and subnets in the DR site.
> Not sure how it is done, but the DR LPAR sees its LUNs okay since it is
> defined in the HMC.
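Such a startup script might look like the sketch below. This is purely illustrative: the serial numbers, the configuration names, and the way the serial is obtained are assumptions, not details taken from the actual setup.

```shell
#!/bin/sh
# Hypothetical startup helper: choose a network configuration based on
# which Power server (production or DR) the LPAR is currently running on.
# The serial numbers below are made-up placeholders.
PROD_SERIAL="1234567"
DR_SERIAL="7654321"

pick_config() {
  # Echo the name of the network configuration matching a machine serial.
  case "$1" in
    "$PROD_SERIAL") echo "primary"   ;;  # production-site subnet
    "$DR_SERIAL")   echo "secondary" ;;  # DR-site subnet
    *)              echo "unknown"   ;;
  esac
}

# In practice the serial would be read from the system (for example from the
# machine type/serial reported by the firmware); a variable stands in here.
serial="${MACHINE_SERIAL:-1234567}"
pick_config "$serial"
```

The real scripts would then apply the chosen TCP/IP interface and route definitions instead of just echoing a name.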
There is still an IO adapter configuration that the HMC manages: the pair of virtual FC adapters, client (on the IBM i LPAR) and server (on the VIOS). The VIOS is a bridge between the virtual server FC adapter and the physical FC adapter, so there is also a configuration step in the VIOS.
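For reference, the virtual-to-physical bridge described here is configured on each VIOS with the NPIV mapping commands, along these lines (the device names `vfchost0` and `fcs0` are examples only, not taken from this setup):

```shell
# Run on the VIOS restricted shell (padmin). Device names are illustrative.
# List the physical FC ports that support NPIV:
lsnports
# Map the virtual server FC adapter vfchost0 to the physical port fcs0:
vfcmap -vadapter vfchost0 -fcp fcs0
# Check the resulting mapping for the client LPAR:
lsmap -npiv -vadapter vfchost0
```

Because these mappings name physical ports on a specific server, they are exactly the per-server configuration that cannot simply be replicated to the DR box.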
> What I think you are saying is, because you have VIOS the actual
> configuration of the adapters is no longer set in the HMC but in VIOS (the
> HMC only knows about the physical cards).
> Since the VIOS configuration is
> specific to the actual server configuration etc. then we need a separate
> VIOS configuration for each Power server. But all the other data on the SAN
> such as IPL source, LUNs for the data etc. are replicated between the two
> data centres.

Correct. Just a detail, but an important one: due to the single-level storage architecture of IBM i, all disks of the system ASP (Auxiliary Storage Pool) hold data, including the IPL (load) source ("data" meaning everything, operating system included, except the microcode; there is an exception here when using "move data from the load source", for very rare and specific operations).

> So during IPL the bootstrap code looks first at the VIOS configuration to
> know what adapters it has (physical or virtual) before branching to the OS
> code? Or something like that?

No. The IPL is initiated using the tagged IO adapter that the HMC is aware of. If there is one disk with microcode managed by this adapter, this microcode will load (if there is more than one, due to some storage zoning misconfiguration, you will be in trouble). Then it tries to get all the disks, and the paths to them, that it had before the last power down. For that, it uses the configuration of the partition at IPL time; whether it has physical or virtual devices does not really matter. It does need all the disks in the system ASP (and basic user ASPs, if any). It recognizes disks by the content of specific tracks that only the microcode is able to handle. This is the reason why DR solutions like yours work: all the disk hardware is distinct before and after an IPL when you activate your DRP. The serial numbers are distinct, but the specific tracks are the same thanks to your Full System Replication setup.

> So using my mainframe background, VIOS is sort of like a cut-down version of
> PR/SM on Z boxes?

Sorry, I cannot comment here, as I have absolutely no knowledge of the IBM z world :-)
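As an aside, the tagged load source adapter mentioned above is part of the partition profile on the HMC and can be inspected from the HMC command line. A sketch follows; the managed-system name "POWER-DR" and partition name "IBMI-DR" are placeholders, and the attribute names assume the HMC CLI fields used for IBM i partition profiles:

```shell
# Run on the HMC command line. "POWER-DR" and "IBMI-DR" are placeholder names.
# Show which slot is tagged as the load source (and alternate restart device)
# in the IBM i partition's profile:
lssyscfg -r prof -m POWER-DR --filter "lpar_names=IBMI-DR" \
  -F name,load_source_slot,alt_restart_device_slot
```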
On Tue, Aug 30, 2022 at 10:05 PM Marc Rauzier <marc.rauzier@xxxxxxxxx>
wrote:
On 30/08/2022 at 09:35, Laurence Chiu wrote:
> Exactly.
> But that raises an interesting point. Everything is on the SAN and the LUNs
> are all replicated from production to DR. So in DR we can spin up the DR
> instance off the replicated LUNs and, with only some minor configuration
> changes during startup to alter the host IP addresses to be in the DR
> server network subnet, it all comes up.
> Now if the VIOS is on the SAN, what would happen when that LUN was
> replicated to the DR site and that replicated LUN was defined as the load
> source? Or is the VIOS configuration separate from the load source?

VIOS disks retain the AIX operating system and the VIOS setup. They are completely distinct from client partition volumes. Even if they reside on your external storage box, they are distinct volumes. And, on top of that, in your DR setup, the VIOS residing on the DR side is *not* used as DR for the VIOS residing on the production side. VIOS are closely tied to the server hardware and are not replicated. You can consider them as an extension of the server's firmware (indeed, they are part of the PowerVM solution, just like the server's hypervisor, PHYP). So, in your target setup with VIOS, you will have dual (for local redundancy) VIOS on each server. And all four VIOS must be up and running, updated, backed up, and maintained, just like production partitions. VIOS must be running for the client partitions to work.
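That maintenance includes backing up each VIOS itself, which is done with the VIOS's own commands rather than through the client partitions' replication. For example (the target file path is illustrative):

```shell
# Run on each VIOS (padmin restricted shell). The backup path is an example.
ioslevel                                  # check the current VIOS software level
backupios -file /home/padmin/vios1.mksysb -mksysb   # back up the VIOS to a mksysb image
```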
On Tue, Aug 30, 2022 at 3:53 AM Rob Berendt <rob@xxxxxxxxx> wrote:
Especially since the OP is already using SAN.
Rob Berendt
--
IBM Certified System Administrator - IBM i 6.1
Group Dekko
Dept 1600
Mail to: 7310 Innovation Blvd, Suite 104
Ft. Wayne, IN 46818
Ship to: 7310 Innovation Blvd, Dock 9C
Ft. Wayne, IN 46818
http://www.dekko.com
-----Original Message-----
From: MIDRANGE-L <midrange-l-bounces@xxxxxxxxxxxxxxxxxx> On Behalf Of
Larry "DrFranken" Bolhuis
Sent: Monday, August 29, 2022 11:38 AM
To: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxxxxxxxx>; Marco Facchinetti <marco.facchinetti@xxxxxxxxx>
Subject: Re: [very long] Moving from physical FC adapter to VIOS (was: VTLs and backup time)
CAUTION: This email originated from outside of the organization. Do not
click links or open attachments unless you recognize the sender and know
the content is safe.
On 8/29/2022 11:25 AM, Marco Facchinetti wrote:
> The biggest debate will be internal or external disks for your IBM i.

Ahh yes, however when presented with the list of capabilities provided by SAN that internal disk will never do, the majority of my customers go SAN. Internal disk DOES have its place, but that place is shrinking daily!
- DrF
--
IBM Champion for Power Systems
www.iInTheCloud.com - Commercial IBM i and Power System Hosting
www.iDevCloud.com - Personal IBM i Hosting
www.Frankeni.com - IBM i and Power Systems Consulting.
--
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list
To post a message email: MIDRANGE-L@xxxxxxxxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: https://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxxxxxxxx
Before posting, please take a moment to review the archives
at https://archive.midrange.com/midrange-l.
Please contact support@xxxxxxxxxxxxxxxxxxxx for any subscription related questions.
Help support midrange.com by shopping at amazon.com with our affiliate
link: https://amazon.midrange.com