On 05/11/2015 at 20:15, Evan Harris wrote:
Hi all


We have been doing some lab work on PowerHA and the technologies around it
and have a question regarding IASP storage:

Is it possible to attach an IASP to a new system and keep the contents of
the IASP intact, and if so, how?

If I had 3 LUNs defined in an IASP and copied that storage to 3 other LUNs,
is it possible to then attach that storage as an IASP to a new system and
bring the IASP up on the different system?

The source and destination systems must be members of the same cluster, and the IASP must be defined in the same device domain. Until 7.2, you need one destination system per target copy. There are improvements in this area in 7.2, but I have not tested them yet.
While the target IASP is varied off, you can use a FlashCopy from the source volumes to the target volumes and then vary on the IASP on the target system. PowerHA provides STRxxxSSN commands to initiate the FlashCopy: xxx is SVC for SVC-family external storage (including siblings such as the V7000), and xxx is ASP for DS8x00 external storage.
I am not aware of any way to do the same clone operation with internal disks.
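As a sketch of such a nightly FlashCopy sequence on SVC-family storage (all object, session, and copy description names here are made up, and the exact command parameters should be verified against the PowerHA documentation for your release):

```
/* Quiesce database writes on the source IASP (optional, but gives a   */
/* cleaner clone and a faster vary-on on the target)                   */
CHGASPACT ASPDEV(PRODIASP) OPTION(*SUSPEND) SSPTIMO(30)

/* Start a FlashCopy session between the ASP copy descriptions        */
STRSVCSSN SSN(NIGHTLYFC) TYPE(*FLASHCOPY) ASPCPY((PRODCPYD TGTCPYD))

/* Resume normal activity on the source system                        */
CHGASPACT ASPDEV(PRODIASP) OPTION(*RESUME)

/* On the target system: vary on the cloned IASP and run the backup   */
VRYCFG CFGOBJ(PRODIASP) CFGTYPE(*DEV) STATUS(*ON)

/* When the backup to tape is done: vary off and end the session      */
VRYCFG CFGOBJ(PRODIASP) CFGTYPE(*DEV) STATUS(*OFF)
ENDSVCSSN SSN(NIGHTLYFC)
```

On DS8x00 storage the corresponding session commands are STRASPSSN and ENDASPSSN instead.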

We do this every night on around 20 clusters to run the backup-to-tape operation on the target systems.

The PowerHA SystemMirror for i Redbook has a chapter on this kind of operation.

I realize there could be some application dependencies missing, like user
profiles, JOBDs or whatever, but from a pure data/storage perspective is it
possible to do what I am asking?

Yes. The administrative domain provided by PowerHA can help you synchronize user profiles, job descriptions, job queue definitions (not their contents), and so on.
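For example (the cluster, domain, node, and object names below are hypothetical, and the ADDCADMRE parameters should be checked against the command help on your release):

```
/* Create a cluster administrative domain spanning both nodes         */
CRTCAD CLUSTER(MYCLU) ADMDMN(MYADMDMN) NODELIST(PRODNODE TGTNODE)

/* Register resources to keep synchronized across the nodes           */
ADDCADMRE ADMDMN(MYADMDMN) RESOURCE(APPUSER) RSCTYPE(*USRPRF)
ADDCADMRE ADMDMN(MYADMDMN) RESOURCE(APPJOBD) RSCTYPE(*JOBD) RSCLIB(APPLIB)
```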



