Hello Diego,
On 09.12.2024 at 22:35, Diego E. KESSELMAN <diegokesselman@xxxxxxxxx> wrote:
> I have found that GO SAVE->22 is not able to run the FTP command, because of the limited command set, so I added these two libraries to make it functional.
Curiosity question: Why did you need to run the FTP command from a limited restore environment?
> But option 21 works; slow, but it works.
Slow in which regard? IPL? Restore?
> Saving to NFS V3 over UDP is slow.
Hm. I can't duplicate this claim from decades of my own experience.
However, I know that NFS over UDP is much more sensitive to network latency than NFS over TCP. It was once common knowledge to never run NFS across routers, because routers in the late 1980s introduced quite a few milliseconds of latency.
> If you compare saving on NFS vs. an iSCSI VTL (Falconstor), the VTL is really faster.
This observation can be based on many reasons and side effects, not just UDP being (perceivably) inferior. In addition, comparing iSCSI (a block-oriented protocol) with NFS (a file-oriented protocol) seems to me a bit like comparing apples to oranges.
> But... if you have a fast 10 Gb network with jumbo frames, and a target Linux box with "more than 32 GB of RAM", an SSD, and a fast CPU, you can get a decent transfer rate.
Jumbo frames are a no-go in my scenario, because NFS-based image catalogs only utilize UDP through the service tools adapter code. I'm not aware of any way to configure the MTU size for the service tools adapter in 7.3, so it's fixed at 1500 bytes. UDP has no payload negotiation abilities, in contrast to TCP, which negotiates the maximum segment size during connection setup. Thus, answers from the NFS server which are larger than 1500 bytes are dropped by the service tools adapter as invalid, oversized frames.
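For illustration: from an ordinary Linux client, that frame size constraint can be mimicked with standard nfs(5) mount options. Just a sketch — host name and paths are made up, and none of this is settable for the service tools adapter on the IBM i side:

  # Force NFSv3 over UDP, and keep rsize/wsize small enough that each
  # READ/WRITE reply fits into a single 1500 byte frame, so no IP
  # fragmentation is needed. Server and paths are hypothetical.
  mount -t nfs -o vers=3,proto=udp,rsize=1024,wsize=1024 \
      nfsserver:/export/images /mnt/images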
On the other hand, I was astonished about the transfer rate obtainable to an older machine as NFS server over "just" 1 GbE. A not-too-precise measurement, taken from the first write to the empty image until the completion indication in the message line, calculates to roughly 33 MBytes/s. This includes "wait time" between the individual calls to the data- and file-system-specific SAV commands. I'm pretty sure that the limitation is largely the target Linux machine's I/O subsystem.
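To verify that suspicion, a crude sequential write test directly on the NFS server would help. A minimal sketch, assuming the export lives under /export/images (made-up path):

  # Write 2 GB sequentially, bypassing the page cache with oflag=direct
  # so the disk subsystem itself is measured, then clean up.
  dd if=/dev/zero of=/export/images/ddtest bs=1M count=2048 oflag=direct
  rm /export/images/ddtest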
>> As far as I'm aware, at least for Debian, the default is to accept whatever the client asks for.
> I am not sure.
Well, that's what I've observed first-hand over the years. While 2.4 kernels needed quite a few parameters (including rsize and wsize) to be set to achieve decent speed, or to work at all, this was no longer necessary at least with 4.x kernel releases. Maybe even 3.x.
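What a modern kernel actually negotiated is easy to check; a quick sketch using standard tools, nothing specific to my setup:

  # Show the effective mount options, including the negotiated rsize
  # and wsize, for all currently mounted NFS file systems.
  nfsstat -m
  # Or, without nfs-utils installed:
  grep nfs /proc/mounts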
> Old Debian and Ubuntu Server releases used NFS with UDP,
How old is "old"? And what do you mean precisely by "used"? Are we talking about the kernel-based NFS server or the client (also kernel-based, because mounting file systems is handled in the kernel), or both?
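For the server side, rpcbind tells which transports an NFS server actually offers; a sketch with a made-up host name:

  # List all RPC services the server has registered; look for "nfs"
  # entries with proto udp and/or tcp.
  rpcinfo -p nfsserver
  # Probe NFSv3 over UDP specifically:
  rpcinfo -u nfsserver nfs 3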
> firewall was an option,
To me, largely an obstacle in (properly shielded) LAN environments. :-)
> and ATFTP-Server worked properly.
I use tftpd-hpa, which copes better with the subtle incompatibilities I've encountered with different PXE BIOS implementations.
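On Debian, its whole configuration lives in /etc/default/tftpd-hpa; the values below are just an example, not my exact setup:

  # /etc/default/tftpd-hpa
  TFTP_USERNAME="tftp"
  TFTP_DIRECTORY="/srv/tftp"
  TFTP_ADDRESS=":69"
  TFTP_OPTIONS="--secure"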
> On the latest releases, I have found that Debian has started changing a couple of these defaults.
Maybe they changed defaults, but my experience with automatic negotiation still stands valid for me.
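One default change I'm aware of: newer nfs-utils releases disable serving NFS over UDP entirely. If that's what you ran into, it can be switched back on in /etc/nfs.conf, assuming a release recent enough to read that file:

  # /etc/nfs.conf -- re-enable NFSv3 over UDP on the server side
  [nfsd]
  udp=y
  vers3=y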
> RedHat and CentOS are different animals and need more tuning.
I know why I stay away from these. ;-)
> Wow! Thank you!
Honor where honor is due!
I will certainly procrastinate on automating the Save 21, though. It is handled by the QMNSAVE CLP, which gets its parameters in one large variable from the menu (I presume) and also calls further CL programs. Too much of a tangle for me currently.
:wq! PoC