Dan,

That's curious. I have dozens of partitions on POWER5, 6, and 7 running IBM i 6.1 and 7.1 with the default IPv6 settings, and I've not encountered any issues. I just tested an FTP between two IBM i 7.1 partitions (with IPv6 active) and got pretty good speed:

169478144 bytes transferred in 1.726 seconds. Rate 98214.836 KB/sec
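
(For scale, my arithmetic rather than part of the FTP output: assuming the client reports KB as 1,024 bytes, 98,214.836 KB/sec x 1,024 x 8 is roughly 805,000,000 bits/sec, or about 0.8 Gb/sec, so quite close to the 1 Gb line rate.)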

Right now the only active IPv6 address is the loopback address. Did you have more than that configured? Was DNS configured to use IPv6?

I'm interested to understand where the bottleneck is.

- Larry "DrFranken" Bolhuis

www.frankeni.com
www.iDevCloud.com
www.iInTheCloud.com

On 1/10/2013 10:27 AM, Dan Kimmel wrote:

Make sure you're either fully configured for IPv6 or disable it. I've had lots of performance problems with newer machines and IBM i versions; 6.1 and 7.1 have IPv6 enabled by default. If you aren't fully configured for IPv6 (i.e., an IPv6 DNS server configured on the machine), the IP stack spends an inordinate amount of time trying to find the other machine, and it seems that time is spent on EVERY packet. Most of my experience is with Java applications, which always favor IPv6 unless explicitly configured to prefer IPv4. I don't have much experience with the FTP client and server on IBM i, so this may not actually be a factor, but it's something to check.
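
For anyone who wants to test the Java angle, the preference Dan describes is controlled by the standard JDK system property java.net.preferIPv4Stack. A minimal sketch, assuming a plain JDK; the 10.17.6.1 address is only an illustration borrowed from Rob's subnet:

    import java.net.InetAddress;

    public class PreferV4 {
        public static void main(String[] args) throws Exception {
            // Must be set before any networking classes initialize; in
            // practice pass -Djava.net.preferIPv4Stack=true on the java
            // command line rather than setting it here.
            System.setProperty("java.net.preferIPv4Stack", "true");

            // With the property in effect the resolver returns IPv4
            // addresses only, so the stack never attempts IPv6 first.
            InetAddress addr = InetAddress.getByName("10.17.6.1");
            System.out.println(addr.getHostAddress());
        }
    }

Running it as java -Djava.net.preferIPv4Stack=true PreferV4 is the safer form, since networking classes may read the property before main() sets it.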

-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of rob@xxxxxxxxx
Sent: Thursday, January 10, 2013 7:44 AM
To: midrange-l@xxxxxxxxxxxx
Subject: Gb Ethernet

Shouldn't I be getting a lot better throughput?

Help me out with my math. I've got an Ethernet line on one LPAR talking to an Ethernet line on another LPAR. They both say:
Current line speed . . . . . . . . : 1G
Current duplex . . . . . . . . . . : *FULL

Target system:
  Resource   Type
  CMB03      268C
  LIN04      6B26

Source system:
  Resource   Type
  CMB30      181C
  LIN06      181C
  CMN242     181C

They are both on our 10.17.6 subnet.

I FTP'd a sizeable file and got these results:

Size of save file, in bytes:       13,458,505,728
Seconds to perform transmission:   2,222
Bytes/sec:                         6,056,933.271
Bits/byte:                         8
Bits/second:                       48,455,466.167
Gb per bit:                        0.000000001
Gb/sec:                            0.048455466
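
Rob's arithmetic is internally consistent; here is a quick sketch that reproduces it, with the constants taken straight from the post:

    public class Throughput {
        public static void main(String[] args) {
            long bytes   = 13_458_505_728L; // size of the save file
            long seconds = 2_222;           // transmission time
            double bytesPerSec = (double) bytes / seconds; // 6,056,933.271
            double bitsPerSec  = bytesPerSec * 8.0;        // 48,455,466.167
            double gbPerSec    = bitsPerSec / 1e9;         // 0.048455466
            System.out.printf("%,.3f bytes/sec = %,.3f bits/sec = %.9f Gb/sec%n",
                              bytesPerSec, bitsPerSec, gbPerSec);
        }
    }

That 0.048 Gb/sec is about 5% of the nominal 1 Gb line rate, which is exactly the gap being asked about.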

iNav's Management Central says LAN utilization was minimal.
iNav says disk percent busy was minimal (currently 2-7%). The source system has 64 disk arms.
The target system is a guest on the source; it has 6 equal "arms".

Shouldn't I be getting a lot better throughput? After all, 0.05 Gb/sec is not 1 Gb/sec.

I am not interested in any virtual Ethernet backplane type solution due to some high-availability concerns.


Rob Berendt

