So there is logic to what they are doing here, yes: add layers which direct traffic to the active database. As others have mentioned, the process of switching from one server to another is not a simple switch like transferring from utility power to a generator and back. With PowerHA you must:

A) End any tasks using the iASP in question. All of them. 100%.
This could take seconds, but more often it takes a few minutes.
B) Vary off that iASP.
Depending on a lot of things, this could, and normally does, take minutes. It must flush all transactions from memory, close files, etc.
C) Flip the cluster resource group, swapping primary and secondary.
D) Vary ON the iASP on the new primary.
Depending again on a lot of things, this could, and normally does, take minutes. SAP has A LOT of files; I've seen nearly 100K PFs in ONE library! You can't shut down and start up a DB of that size in zero time. You are effectively running the DB IPL steps on this iASP.
[Note to PowerHA-knowledgeable folks: yes, steps B, C, and D all happen as part of the single switchover in step C, but this is what is truly happening.]
E) Bring up any needed tasks on the newly active machine.

Now there is no way that's happening quickly enough that the user won't notice. Yes, flash storage helps, and certainly having less stuff actually running on that partition helps too (the layering of servers), but it's still not instant. It takes time.
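
For the curious, here is a minimal sketch of how those steps might be scripted from Python on IBM i, running CL commands through the PASE 'system' utility. The cluster, CRG, iASP, and subsystem names (SAPCLU, SAPCRG, SAPIASP, SAPSBS) are made up for illustration, and this is not a tested procedure; it just shows where each lettered step lands.

# Minimal sketch, not a tested switchover procedure. Object names are
# illustrative assumptions, not anything from the SAP document.
import subprocess

def cl(cmd: str) -> None:
    """Run an IBM i CL command from PASE via the 'system' utility."""
    subprocess.run(["system", cmd], check=True)

# A) End everything using the iASP (here modeled as one application subsystem).
cl("ENDSBS SBS(SAPSBS) OPTION(*IMMED)")

# C) Swap primary and backup in the device cluster resource group.
#    Under the covers this is where B and D happen:
#      B) VRYCFG CFGOBJ(SAPIASP) CFGTYPE(*DEV) STATUS(*OFF)  on the old primary
#      D) VRYCFG CFGOBJ(SAPIASP) CFGTYPE(*DEV) STATUS(*ON)   on the new primary
cl("CHGCRGPRI CLUSTER(SAPCLU) CRG(SAPCRG)")

# E) Bring the application environment back up on the newly active node.
cl("STRSBS SBSD(SAPSBS)")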

As to the 5250 part, it seems like it COULD be done, sort of, by having a server running the 5250 jobs and using remote data access just as SAP describes here. The thing is, that job would need to be smart enough to wait and retry I/O operations while the server is swapping nodes. AND if the server (partition) actually running YOUR 5250 job fails, you've got the same problem you started out to solve; it's just more complicated now!
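
To make that "wait and retry" part concrete, here is a rough sketch of a retry wrapper with backoff around remote database calls. The connection and query functions are hypothetical placeholders, not any particular driver's API; the point is only that every I/O the job does would have to go through something like this while the nodes are swapping.

# Minimal sketch of retry-with-backoff around remote DB I/O during a node swap.
# ConnectionLost and the commented-out run_query() call are placeholders for
# whatever remote data access mechanism (DDM/DRDA, ODBC, etc.) the job uses.
import time

class ConnectionLost(Exception):
    """Raised by the placeholder I/O layer when the remote node goes away."""

def with_retry(operation, max_wait_seconds=600, initial_delay=2):
    """Retry an I/O operation while the cluster is switching nodes."""
    delay = initial_delay
    waited = 0
    while True:
        try:
            return operation()
        except ConnectionLost:
            if waited >= max_wait_seconds:
                raise  # switchover took too long; give up and surface the error
            time.sleep(delay)
            waited += delay
            delay = min(delay * 2, 60)  # exponential backoff, capped at one minute

# Usage (placeholder call): every database touch the 5250 job makes is wrapped.
# order = with_retry(lambda: run_query(conn, "SELECT * FROM ORDERS WHERE ID = ?", 42))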

Rube Goldberg comes to mind...

- Larry "DrFranken" Bolhuis

www.Frankeni.com
www.iDevCloud.com - Personal Development IBM i timeshare service.
www.iInTheCloud.com - Commercial IBM i Cloud Hosting.

On 10/2/2018 9:48 AM, Steinmetz, Paul wrote:
Charles,



I reviewed the link; it has many components and appears complicated.

Why no 5250? That could be a show stopper.



Establishing a highly available SAP landscape requires removing any single points of failure (SPoF). To this end, it is convenient to conceptually separate the SAP landscape into components called resources. Each resource is a required component and must be redundant. Some components like application servers can simply exist in multiple. Other components, such as the database or file system, exist either as a clustered or switchable resource. A highly available SAP environment relies on the cluster services of the underlying operating system upon which it runs. On IBM i, the cluster services are provided by PowerHA SystemMirror for i. The redundant switchable resources in the SAP environment are controlled by PowerHA SystemMirror for i.



Using the new high availability characteristics of SAP and PowerHA SystemMirror for i, it is possible to create a highly available SAP environment which does not require the applications to end during a planned or unplanned outage.



The high availability solution described conceptually in this paper and in more detail in SAP note 1635602 separates the data and application components of the SAP environment across IBM i partitions and SAP instances to eliminate SPoF within the runtime environment.

For the data component, the SAP database exists in an IASP on an IBM i node. One of the PowerHA SystemMirror for i replication technologies is used to replicate the database to a second node. This provides the capability to switch between the primary and backup node.



Application high availability with PowerHA SystemMirror for i is achieved by implementing the SAP Central Services (SCS/ASCS) instance and the corresponding enqueue replication server (ERS) instance in two IBM i nodes, one node designated as the primary SCS/ASCS and the other as the backup. The SCS/ASCS runs on the current primary node, the ERS runs on the current backup node.

When combined with redundant SCS/ASCS instances, multiple remote SAP application server instances inherently provide application server redundancy. Multiple instances also provide application high availability during kernel updates. In figure 6, multiple application servers are configured, and login groups managed by SAP central services could be used to assign users to a defined set of application servers.
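
As a toy illustration of that last point (purely illustrative, not SAP's actual logon group mechanism), spreading users across redundant application server instances can look like the sketch below. The instance names and group membership are made up.

# Toy sketch of logon-group-style distribution of users across redundant
# application server instances. Names and groups are made-up examples,
# not SAP configuration.
from itertools import cycle

LOGON_GROUPS = {
    "FINANCE": ["appsrv01", "appsrv02"],
    "LOGISTICS": ["appsrv02", "appsrv03", "appsrv04"],
}

# One round-robin iterator per group, so each new logon lands on the next instance.
_rotation = {group: cycle(instances) for group, instances in LOGON_GROUPS.items()}

def assign_instance(group: str) -> str:
    """Pick the next application server instance for a user logging on to a group."""
    return next(_rotation[group])

# Example: three FINANCE logons alternate between the two redundant instances.
print([assign_instance("FINANCE") for _ in range(3)])  # ['appsrv01', 'appsrv02', 'appsrv01']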



Paul



-----Original Message-----

From: MIDRANGE-L [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Charles Wilt

Sent: Monday, October 01, 2018 6:25 PM

To: Midrange Systems Technical Discussion

Subject: Re: HA load balancing solution



So you want an IBM i cluster....



The functionality has been in the OS for a while now...



Your app would have to be designed to support it... and 5250 isn't going to cut it.



Take a look at the SAP on i HA doc...

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/5cb5ed706d254a8186256c71006d2e0a/7d18ff197d5468e186257a1c005c6a10/$file/ha%20options%20for%20sap%20using%20powerha.doc



Charles



On Mon, Oct 1, 2018 at 1:07 PM Steinmetz, Paul <PSteinmetz@xxxxxxxxxx> wrote:



Anyone in the group running an HA load balancing solution?

I've heard of one vendor now offering this.

With this solution, there is no failover.

Two (or more) LPARs share the active load.

If one LPAR goes down for maintenance (planned or unplanned), the other LPAR continues to keep all apps running.

Not all applications (IBM and/or 3rd party) may support this, however.



Thank You

_____

Paul Steinmetz

IBM i Systems Administrator



Pencor Services, Inc.

462 Delaware Ave

Palmerton Pa 18071



610-826-9117 work

610-826-9188 fax

610-349-0913 cell

610-377-6012 home



psteinmetz@xxxxxxxxxx

http://www.pencor.com/






