Make sure you're either fully configured for IPv6 or have it disabled. I've had lots of performance problems with newer machines and IBM i versions; 6.1 and 7.1 have IPv6 enabled by default. If you aren't fully configured for IPv6 (i.e., an IPv6 DNS server defined on the machine), the IP stack spends an inordinate amount of time trying to resolve the other machine over IPv6 before falling back to IPv4, and it seems that time is spent on EVERY packet. Most of my experience is with Java applications, which always favor IPv6 unless explicitly configured to prefer IPv4; a sketch of that configuration follows. I don't have much experience with the FTP client and server on IBM i, so this may not actually be a factor, but it's something to check.
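For the Java side, here is a minimal sketch of forcing IPv4. java.net.preferIPv4Stack (and its counterpart java.net.preferIPv6Addresses) are standard JVM networking properties; the class name and setup below are just an illustration, nothing IBM i-specific:

    // Sketch: make the JVM prefer IPv4 on a dual-stack host. The property
    // must be set before any java.net classes are initialized, so the
    // command-line form is the safer option in practice:
    //
    //     java -Djava.net.preferIPv4Stack=true MyApp
    //
    public class PreferIPv4 {
        public static void main(String[] args) {
            // Programmatic form; only effective if no networking
            // classes have been loaded yet when this runs.
            System.setProperty("java.net.preferIPv4Stack", "true");
            System.out.println("preferIPv4Stack = "
                    + System.getProperty("java.net.preferIPv4Stack"));
        }
    }

The knob in the other direction is java.net.preferIPv6Addresses=true, for hosts where IPv6 really is the configured path.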

-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of rob@xxxxxxxxx
Sent: Thursday, January 10, 2013 7:44 AM
To: midrange-l@xxxxxxxxxxxx
Subject: Gb Ethernet

Shouldn't I be getting a lot better throughput?

Help me out with my math. I've got an Ethernet line on one LPAR talking to an Ethernet line on another LPAR. They both say:
Current line speed . . . . . . . . : 1G
Current duplex . . . . . . . . . . : *FULL

Target system:
  Resource   Type
  CMB03      268C
  LIN04      6B26

Source system:
  Resource   Type
  CMB30      181C
  LIN06      181C
  CMN242     181C

They are both on the same 10.17.6 subnet.

I FTP'd a sizeable file and got these results:

Size of save file (bytes):      13,458,505,728
Transmission time (seconds):    2,222
Bytes/second:                   6,056,933.271
Bits/byte:                      8
Bits/second:                    48,455,466.167
Gb per bit:                     0.000000001
Gb/second:                      0.048455466
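
To double-check that math, a quick sketch in Java; the figures are taken from above, and the class and variable names are just for illustration:

    // Reproduces the throughput arithmetic above.
    public class Throughput {
        public static void main(String[] args) {
            long bytes = 13458505728L;  // size of the save file
            long seconds = 2222;        // observed transfer time
            double bitsPerSecond = bytes * 8.0 / seconds;
            System.out.printf("%,.3f bits/sec = %.9f Gb/sec%n",
                    bitsPerSecond, bitsPerSecond / 1e9);
            // Prints 48,455,466.167 bits/sec = 0.048455466 Gb/sec,
            // roughly 5% of the 1 Gb line rate.
        }
    }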

iNav's Management Central says LAN utilization was minimal.
iNav says percent busy of disk was minimal (currently 2-7%). Source system has 64 disk arms.
Target system is a guest on the source; it has 6 equal "arms".

Shouldn't I be getting a lot better throughput? After all, 0.05 Gb/sec is not 1 Gb/sec.

I am not interested in any virtual Ethernet backplane type solution due to some H/A concerns.

Rob Berendt
