You know, we tried our best to throttle the performance of the Power 8.
It just can't be done.
Instead of each lpar having its own fiber card to the SAN, as on Power
6, we made them all share one.
We ran the saves on three lpars at once, through the same card.
Virtualized via VIOS.
The switch only supports 4Gb per port, even though we tried an 8Gb SFP.
We still managed to cut anywhere from 17% to 71.5% off of each individual
lpar's backup time.
The lpar with the smallest improvement might have done even better, but it
switched from being a 'Host' lpar down to being a 'Guest' lpar. That, and
it is still running 7.1 TR8 while the other lpars are running 7.2.
Overall, it still got
Excel spreadsheet with details available upon request.
Tape library used was an IBM TS3310 (machine type 3576) with fiber LTO4 drives.
Backup program was BRMS.
Times on spreadsheet are gleaned from DSPLOGBRM *BKU.
All saves done in restricted state.
The boss closely monitored the port on the SAN switch. He saw it peak out
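For a rough sanity check (my figures, not from the original test): a 4Gb Fibre Channel port delivers roughly 400 MB/s of payload after 8b/10b encoding, and LTO4's published native (uncompressed) rate is about 120 MB/s per drive, so three drives streaming at native speed want ~360 MB/s of the ~400 MB/s one port can carry:

```python
# Back-of-the-envelope check: can one 4Gb FC port feed three LTO4 drives?
# Assumed figures (not from the post): 4Gb FC ~ 400 MB/s usable payload
# after 8b/10b encoding; LTO4 native (uncompressed) rate ~ 120 MB/s.

FC_4GB_USABLE_MBS = 400   # approximate usable payload of a 4Gb FC port
LTO4_NATIVE_MBS = 120     # published native transfer rate of an LTO4 drive
DRIVES = 3                # three lpars saving through the same card

demand = LTO4_NATIVE_MBS * DRIVES        # aggregate demand in MB/s
headroom = FC_4GB_USABLE_MBS - demand    # what's left on the port

print(f"Aggregate demand: {demand} MB/s, headroom: {headroom} MB/s")
```

Note that with compressible data an LTO4 can ask for well over its native rate, at which point the shared 4Gb port becomes the choke point, which would explain trying the 8Gb SFP.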
You know, when we upgraded our tape drives from LTO3 to LTO4 I had
expected a bigger reduction in save times than we achieved. Who knew that
a Power 6 could not drive an LTO4 at rated speeds? Reminds me of when I
upgraded a CPU years ago, back when we still had 3590s, and the save
times improved dramatically.
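If a save were purely drive-limited, the published native rates (LTO3 ~80 MB/s, LTO4 ~120 MB/s; my figures, not from the post) predict only about a one-third cut in save time from the drive upgrade. Anything much less than that suggests the host, not the drive, was the bottleneck:

```python
# Best-case save-time reduction from LTO3 -> LTO4, assuming the drive is
# the only bottleneck. Rates are published native (uncompressed) specs.
LTO3_MBS = 80.0
LTO4_MBS = 120.0

# Save time is inversely proportional to throughput, so the fractional
# reduction in time is 1 - (old_rate / new_rate).
reduction = 1 - LTO3_MBS / LTO4_MBS
print(f"Best-case save-time reduction: {reduction:.0%}")
```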
Disclaimer: IBM did not supervise this test. There is no warranty that your
results will be anything like ours. Your Mileage May Vary (YMMV).
IBM Certified System Administrator - IBM i 6.1
Mail to: 2505 Dekko Drive
Garrett, IN 46738
Ship to: Dock 108
Kendallville, IN 46755
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list
To post a message email: MIDRANGE-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
or email: MIDRANGE-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives