So I do get the challenge of making this work, and I support such efforts! IBM says it's not supported and all that, but far be it from me to not do things that are Unsupported! LOL!

Where I ALWAYS focus, though, is the reverse: how is this backup useful should I need to recover from it?

Likely in this order, the needs are:

1) Restore 'a thing,' which could be any IBM i object but most often is a file or file member.

2) Copy a group of things to another system. This may be to a test system, or development system for example.

3) Recover a lost system.

The frequency of #1 and #2 together is probably 1000 times that of #3.

BUT when #3 happens, you have to know how, AND have everything needed to make it work. For example, if you find you need to regenerate the BOOTP directory contents but all you have is the images on Linux, then you may be stuck.

The beauty of having a tape is that you can find 100 places that can read a tape and recover your system. Similar with USB. If you are planning on using a non-supported Linux boot setup to recover in a disaster, then the recovery is 100.00% on you, because *NOBODY* else will understand how all the pieces go together!

- DrF


On 12/9/2024 4:35 PM, Diego E. KESSELMAN wrote:
Hi Patrik,

On 09/12/24 14:43, Patrik Schindler wrote:
>> When we have more than 500GB, I prefer to create a GO SAVE option 22 to the Remote Optical Image Catalog + QGPL + QUSRSYS, and option 23 to a Virtual Tape Image Catalog, just because it is faster.
> I presume this is also more error prone when trying to automate, yes?

Well... no. The original procedure was published by IBM using BRMS, where you can find something similar, but this is a native SAV* commands approach.

Let's say this is my (not-so) secret sauce:

I have found that GO SAVE->22 is not able to run the FTP command because of the small command set available, so I added these two libraries to make it functional.
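In CL terms, the workaround described above might look roughly like this. This is only a sketch: MYLIB1 and MYLIB2 are placeholders (the message does not name the two libraries actually added), and the remote system name is an assumption.

```
/* Hypothetical sketch: make FTP usable during a GO SAVE option 22  */
/* run by adding the needed libraries to the library list first.    */
/* MYLIB1/MYLIB2 are placeholders, not the libraries from the post. */
ADDLIBLE LIB(MYLIB1)
ADDLIBLE LIB(MYLIB2)

/* Then transfer the saved images to the Linux target via FTP.      */
FTP RMTSYS('backup.example.com')
```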

This is reliable, because you can create a CL program replacing the option 2x menu choice with your own commands.

The Virtual Tape Image Catalog part is step #2, and likewise you can build your own CL program, but it requires enough disk space.
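For readers new to virtual tape on IBM i, the native-command shape of that step might look something like the sketch below. Device, catalog, volume names, path, and size are all placeholder assumptions, not values from the thread.

```
/* Create a virtual tape device and vary it on (placeholder names). */
CRTDEVTAP DEVD(TAPVRT01) RSRCNAME(*VRT)
VRYCFG CFGOBJ(TAPVRT01) CFGTYPE(*DEV) STATUS(*ON)

/* Create a tape-type image catalog backed by IFS files.            */
CRTIMGCLG IMGCLG(SAVVRT) DIR('/savvrt') TYPE(*TAP) CRTDIR(*YES)
ADDIMGCLGE IMGCLG(SAVVRT) FROMFILE(*NEW) TOFILE(VOL001) IMGSIZ(100000)

/* Attach the catalog to the device and save user libraries to it.  */
LODIMGCLG IMGCLG(SAVVRT) DEV(TAPVRT01)
SAVLIB LIB(*ALLUSR) DEV(TAPVRT01)
```

The resulting image files under /savvrt are what would then be copied off to the Linux box; that copy step is where the FTP workaround above comes in.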

In fact, I started working on the BRMS procedure and found it was too complex, so I decided to use my own recipe.


>> But the option 21 works, slow but works.
> Slow in which regard? IPL? Restore?

Saving over NFS v3 on UDP is slow. If you compare saving on NFS vs. an iSCSI VTL (FalconStor), the VTL is much faster. If you compare with a local Image Catalog (Tape or Optical) on fast disks, the Image Catalog can save as fast as the VTL, sometimes faster.

The IPL and Restore are slow too.

But... if you have a fast 10Gb network with jumbo frames, and a target Linux box with "more than 32GB of RAM", SSD storage, and a fast CPU, you can get a decent transfer rate.

> As far as I'm aware, at least for Debian, the default is to accept whatever the client asks for.

I am not sure. Old Debian and Ubuntu Server releases used NFS with UDP, the firewall was optional, and the ATFTP server worked properly. On the latest releases I have found Debian starting to change a couple of these defaults.

RedHat and CentOS are different animals and need more tuning.

> Async gives a huge increase in write performance for some workloads, so I tend to add it in any case when writing is mandatory. The drawback is a certain chance of data loss when the NFS server machine fails. A risk I take, especially for a backup machine.

Totally agree
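For reference, the async behavior discussed above is set per export on the Linux side in /etc/exports. A minimal sketch, where the path and subnet are placeholder assumptions:

```
# /etc/exports -- placeholder path and subnet.
# 'async' trades crash safety for write throughput, as noted above.
/srv/ibmi-backup  192.168.1.0/24(rw,async,no_root_squash,no_subtree_check)
```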
> Your initial work some years ago was the primary inspiration for me to take the time and learn how to deal with this, after working with real tape (libraries) became less and less of an option the more we thought it through.

Wow! Thank you!





This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page.
