I'm going to repeat what I said before: if you will protect
the SSDs with RAID5, any dual adapter configuration is likely to
add latency, and you won't get the biggest bang for your buck.
When doing RAID5 with dual adapters, you want enough devices
for 2 parity sets, which means a minimum of 6. SAS drives
perform very well as 3-drive RAID5 parity sets.
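To make the capacity trade-off concrete, here is a small sketch (my own illustration, not from the thread; the 387 GB drive size is a hypothetical example) of how splitting drives into RAID5 parity sets costs one drive's capacity per set:

```python
def raid5_usable(drives, parity_sets, drive_gb):
    """Usable capacity when drives are split into equal RAID5 parity sets.

    RAID5 loses one drive's worth of capacity per parity set, so more
    (smaller) sets trade capacity for smaller rebuild domains.
    """
    if drives % parity_sets != 0:
        raise ValueError("drives must divide evenly into parity sets")
    per_set = drives // parity_sets
    if per_set < 3:
        raise ValueError("RAID5 needs at least 3 drives per set")
    return (per_set - 1) * parity_sets * drive_gb

# 6 hypothetical 387 GB SSDs as two 3-drive RAID5 sets
# (the minimum for a dual adapter configuration):
print(raid5_usable(6, 2, 387))  # 1548 GB usable
# The same 6 drives as a single RAID5 set keep more capacity:
print(raid5_usable(6, 1, 387))  # 1935 GB usable
```

This is why 6 is the practical floor for dual adapters: each adapter gets a full 3-drive parity set.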
Kirk Goins <kirkgoins@xxxxxxxxx> wrote on Sat, 11 Jan 2014
Sue: no concurrent maintenance is the reason for not using the
PCIe card-based SSDs.
Also, I was mistaken (not wrong :) ) about the CEC disk
backplane. When I did a WRKHDWRSC *STG I didn't see the cards,
and SST's Work with Devices with Cache Batteries didn't show
them either. It turns out they were never allocated to the
Base/Hosting partition; using DLPAR and ADD, they show up.
So to regroup... I am a little confused on the performance
side of things. Can you rate these options best to worst in
regards to performance? Some of them have not been discussed in
this thread.
As this is a **performance** question, the answer depends on
whether you need the write cache for the data you choose to put
on the SSDs.
4 SSDs on the Base Disk Controllers (175 MB cache)
4 SSDs on a pair of 5805 Controllers (380 MB cache) in the CEC,
attached to an EXP24S
These two options will provide similar levels of performance
when the write cache is able to handle all the writes. Chances
are the 175 MB write cache will begin to experience write cache
overflow before the 380 MB write cache does. Otherwise the
capabilities of the adapters are the same.
You mention below you have 5877. Is it filled with cards? Any
of the adapters you've mentioned can be placed in the 5877. I
would always opt for placement there over in the CEC due to
concurrent maintenance capabilities for the adapter.
4 SSDs on an ESA1 (which it appears I would need to add a 12X
expansion drawer for)
ESA1 can be installed in either a full height CEC slot or a 58xx
expansion drawer.
Both ESA1 and ESA2 (same adapter, different tail stock for
placement) require the SSDs be in EXP24S.
4 SSDs on an ESA2 plugged into a PCIe riser card in the CEC
In General is there any performance gain to placing the disk
controllers in the CEC when possible over placing the same
controllers out in a 12x Expansion?
To date in the lab for the SAS adapters, we have not been able
to measure a difference in IOPS, MB/s, or disk response time
based on card placement in the CEC versus an I/O expansion. Not
to say it isn't possible, but the workload we use to get the
disk measurements doesn't stress the I/O slots so PCIe
generation 1 in the I/O enclosure versus PCIe generation 2 in
the CEC doesn't make a difference.
In the CEC
Two FC5913 with 48 drives in 2 EXP24S.
You could add 1 EXP24S with the SSDs to this adapter. All you
would need to do is recable.
FC5901 Tape Card
1 Port on a 4 Port Ethernet Card
So your full height slots are full. Any cards you add to the
CEC would require the riser card, and they would need to be low
profile.
The same hardware in a 5877 Expansion
Just those 4 adapters? This means you've got 6 more slots
available for adapters.
IBM Americas Advanced Technical Skills (ATS) Power Systems
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list