On 30/10/2021 at 23:13, Laurence Chiu wrote:
Thanks. That looks like what we need.

I forgot something. Most probably, the resource name of the Ethernet line will not be the same when the partition runs on the DR site. Resource names are closely tied to the hardware, so when you switch to the DR site for the first time, new resources will be created. Therefore, the Ethernet line description must be updated as well.

Having said that, there are several solutions.

You can have several distinct Ethernet line descriptions, such as one set for the Production site and another for the DR site. Or, if you keep a single set regardless of the site, you have to update the line descriptions before varying them on, in order to apply the correct resource name for the site.

Resource names are created by the system, at IPL time or when a new adapter is dynamically added to the partition. Once they are known by the system, they remain as long as nothing else changes (such as the hardware location). This means that when you start the partition for the first time on the DR site, resources will be created for the new hardware. Resources for the Production site will remain, but with a "not found" status. When you switch the partition back to the Production site, the resources related to this site will be available again under the same names, and the resources for the DR site will in turn remain with a "not found" status, ready for the next switch. Again, this holds only if no hardware change occurs.

Your startup program has to deal with this resource name topic in some way if you decide to keep the same Ethernet line descriptions whatever site your partition is running on.
First, set the Ethernet line descriptions not to vary on at IPL time (ONLINE parameter of the CHG(CRT)LINETH command).
Then, depending on the serial number of the partition, update the Ethernet line descriptions with the CHGLINETH command and its RSRCNAME parameter. I am not sure, but I seem to remember that when you start an IP interface over a line description which is varied off, the system varies it on, so there is no need to do it yourself. If I am wrong, just vary on the Ethernet line before starting the interface, with the VRYCFG command.
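A minimal CL sketch of that update; the line name, serial numbers and CMNxx resource names below are placeholders, not values from this thread:

/* Select the correct resource name for the site, then vary on.     */
/* ETHLINE, the serial numbers and CMNxx resources are placeholders.*/
PGM
  DCL        VAR(&SRLNBR) TYPE(*CHAR) LEN(8)
  RTVSYSVAL  SYSVAL(QSRLNBR) RTNVAR(&SRLNBR)
  IF         COND(&SRLNBR *EQ '78ABC12') +
               THEN(CHGLINETH LIND(ETHLINE) RSRCNAME(CMN05))
  IF         COND(&SRLNBR *EQ '78XYZ34') +
               THEN(CHGLINETH LIND(ETHLINE) RSRCNAME(CMN09))
  /* Only needed if starting the interface does not vary it on */
  VRYCFG     CFGOBJ(ETHLINE) CFGTYPE(*LIN) STATUS(*ON)
ENDPGM

CHGLINETH requires the line to be varied off, which is the case here since the descriptions are set to ONLINE(*NO).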

Note that IBM Lab Services provides a tool to handle this situation: PowerHA tools for IBM i - Full System Replication https://www.ibm.com/support/pages/powerha-tools-ibm-i-full-system-replication

At startup time, this tool is able to retrieve the hardware configuration (using the DSPHDWRSC command with output to a physical file) and compare it with the expected hardware depending on the situation (Production versus DR), so that it can start the appropriate IP on the appropriate Ethernet line with the appropriate resource.
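The retrieval part is a single command; the output file name here is just an example:

/* Dump the communication resources (CMNxx) into a physical file, */
/* for comparison with the expected hardware of the current site. */
DSPHDWRSC TYPE(*CMN) OUTPUT(*OUTFILE) OUTFILE(QTEMP/HDWRSC)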

But I do have a further query. For historical reasons which I won't go
into, the production server has one 4-port NIC. Each port on the NIC is
patched individually into a switch, with two going to the primary and two
going to the secondary. They all have different IPs. Then the four IPs are
bound to one virtual IP address which matches the server's DNS. I am told
the reason for this setup is that we could stand a failure of up to 3 of the
switch ports, but of course we would be toast if the NIC failed.

Yes, VIPA is a common setup. However, I would prefer using a link aggregation running at Ethernet layer 2. It simplifies the IP configuration: in your case, you would have 2 "physical" IP addresses in place of 4. And, depending on the switches' setup and capabilities, you could even create one aggregation with the 4 ports of your adapter over the two switches. If the switches are stacked, it works; we have this setup on some of our devices.

However, you have to take care of the switches' software level. We had a case where the Cisco software required rebooting both switches of the stack at the same time in only one case: when updating the software. Depending on business requirements, it can also be acceptable to have a short communication break during infrequent switch upgrades. I am not a network guy, so maybe there is now Cisco (or other vendors') software which supports running stacked switches with distinct versions.

With link aggregation, you have the same level of protection against failure as with VIPA, but better load balancing, because it is provided at layer 2, closer to the hardware. More information on how it works and how to proceed: https://www.ibm.com/support/pages/system/files/inline-files/i-ethernetlines-pdf.pdf
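If you go that route, the aggregate line is created in one step. A sketch, assuming hypothetical names and that the four ports report as CMN01 through CMN04; the AGGPCY values shown are the ones I remember as defaults, so double-check them against your release:

/* Create one aggregate Ethernet line over the four ports.  */
/* The line name and CMNxx resource names are placeholders. */
CRTLINETH LIND(ETHAGG) RSRCNAME(*AGG) +
          AGGPCY(*ETHCHL *DFT) +
          AGGRSCL(CMN01 CMN02 CMN03 CMN04)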

Anyway, that requires some additional configuration of course. And since
each server is behind an application firewall and sends traffic to other
systems behind other firewalls, we have to make sure outbound traffic
carries the DNS IP, else it will be blocked by the firewalls.

The DR site is set up similarly, except of course all the IP addresses are
in a different subnet and behind different firewalls, with rules based on
the primary IP there.

So we would create two IP interfaces set up as above, with one interface as
primary and one as secondary, and name them accordingly.

Then the startup program would interrogate the serial number and start the
appropriate interface.

Sounds like the perfect solution. I might get our tech guys to try it on
one of our test LPARs, which we can IPL at will and access via the console
to check that the IP configuration has been done.


On Sat, Oct 30, 2021 at 10:42 PM Marc Rauzier <marc.rauzier@xxxxxxxxx> wrote:

On 30/10/2021 at 11:13, Laurence Chiu wrote:
It was mentioned that the production LPAR could have two IP stacks
one for the production site and one for the DR site. Then on IPL the
startup script could interrogate the serial number of the server it was
running on and implement the appropriate IP stack. That sounds like the
right answer but we don't have the expertise or knowledge to know how to

First of all, set up the IP interfaces (not stacks) so they do not start
automatically. Review the AUTOSTART parameter of the CHG(ADD)TCPIFC command.

You can set up gateways to specify preferred interfaces, so that they are
used only when on the appropriate site. Review the CHG(ADD)TCPRTE command
and its BINDIFC parameter.
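A sketch of those two preparation steps; all addresses below are placeholders for illustration only:

/* Keep both site interfaces from starting automatically at IPL. */
CHGTCPIFC INTNETADR('10.1.0.10') AUTOSTART(*NO)  /* Production */
CHGTCPIFC INTNETADR('10.2.0.10') AUTOSTART(*NO)  /* DR         */

/* Bind each default route to its site interface, so each route  */
/* is usable only when its own interface is active.              */
ADDTCPRTE RTEDEST(*DFTROUTE) NEXTHOP('10.1.0.1') BINDIFC('10.1.0.10')
ADDTCPRTE RTEDEST(*DFTROUTE) NEXTHOP('10.2.0.1') BINDIFC('10.2.0.10')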

Then, in the startup program (the one named in the QSTRUPPGM system value),
before starting any service or subsystem, add a couple of lines as below:

Retrieve the serial number from the QSRLNBR system value (RTVSYSVAL command)

If serialnumber = siteone, start the IP interface of siteone with the STRTCPIFC
command (I would suggest using an alias here in place of the IP itself)

If serialnumber = sitetwo, start the IP interface of sitetwo, still with the
STRTCPIFC command

If serialnumber = an unexpected value, do nothing and report a message
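The steps above can be sketched in CL as follows; the serial numbers and addresses are placeholders, not values from this thread:

/* Startup-program fragment: start the IP interface matching the  */
/* serial number of the system the partition is running on.       */
PGM
  DCL        VAR(&SRLNBR) TYPE(*CHAR) LEN(8)
  RTVSYSVAL  SYSVAL(QSRLNBR) RTNVAR(&SRLNBR)
  SELECT
  WHEN       COND(&SRLNBR *EQ '78ABC12') +
               THEN(STRTCPIFC INTNETADR('10.1.0.10'))  /* site one */
  WHEN       COND(&SRLNBR *EQ '78XYZ34') +
               THEN(STRTCPIFC INTNETADR('10.2.0.10'))  /* site two */
  OTHERWISE  CMD(SNDPGMMSG MSG('Unexpected serial number, no +
               interface started'))
  ENDSELECT
ENDPGM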

Another approach would be to use a DHCP server to provide the IP address
(review the *IP4DHCP special value of the INTNETADR parameter of the
CHG(ADD)TCPIFC command), and make sure that the DHCP and DNS servers of the
enterprise are properly set up so that DNS is updated automatically from DHCP.
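In that case the interface definition becomes site-independent; a sketch with a placeholder line name:

/* Let the interface obtain its address from the site's DHCP server. */
/* ETHLINE is a placeholder line description name.                   */
ADDTCPIFC INTNETADR(*IP4DHCP) LIND(ETHLINE)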

This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list
