


Sue,

Thanks for the clarification and update.

< A S824 system unit can have up to 18 SFF3 disk bays (2.5" HDDs or SSDs) plus up to 8 1.8" SSDs for a total of 26 devices mounted in the CEC.>

Using option 3, high function backplane, fully loaded CEC
18 - 775 GB SFF-3 SSDs for IBM i (ES0P) using RAID 5 - 12 TB usable
8 - 387 GB 1.8-inch SSDs for IBM i (ES17) using RAID 5 - 2.7 TB usable (Are the 387 GB drives the largest 1.8-inch SSDs available?)

Total of 14.7 TB usable.
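For reference, the usable figures above follow from RAID 5 giving up one drive's worth of capacity per array to parity. A quick sketch (the two-array split for the 18 drives is my assumption, since the dual integrated controllers would typically each own a 9-drive array; the function name is illustrative, not an IBM tool):

```python
def raid5_usable_gb(num_drives, drive_gb, num_arrays=1):
    """RAID 5 usable capacity: each array loses one drive to parity."""
    drives_per_array = num_drives // num_arrays
    return num_arrays * (drives_per_array - 1) * drive_gb

# 8 x 387 GB 1.8-inch SSDs as one RAID 5 array
print(raid5_usable_gb(8, 387))      # 2709 GB, ~2.7 TB
# 18 x 775 GB SFF-3 SSDs as two 9-drive RAID 5 arrays
print(raid5_usable_gb(18, 775, 2))  # 12400 GB, ~12 TB
```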

Now the big question: create 3 to 4 IBM i LPARs, non-VIOS (previous thread with Jim and Rob).
One of the LPARs might be an HA LPAR.
IBM i hosting IBM i.
What would the performance be with the backplane controller?
Is there a controller that would offer better performance? (5913, ESA1, ESA3)??

Each of the three backplane options provides SFF-3 SAS bays in the system unit. These
2.5-inch or small form factor (SFF) SAS bays can contain SAS drives (HDD or SSD) mounted
on a Gen3 tray or carrier. Thus the drives are designated SFF-3. SFF-1 or SFF-2 drives do
not fit in an SFF-3 bay. All SFF-3 bays support concurrent maintenance or hot-plug capability.
In addition to supporting HDDs and SSDs in the SFF-3 SAS bays of the Power S824, the
storage backplane feature EJ0P supports a mandatory 8-bay, 1.8-inch SSD Cage (#EJTM).
All eight bays are accessed by both of the integrated SAS controllers. The bays support
concurrent maintenance (hot-plug). SSD 1.8-inch drives, such as the 387 GB capacity
feature ES16 (AIX, Linux) or ES17 (IBM i), are supported.
The high performance SAS controllers provide RAID 0, RAID 5, RAID 6, and RAID 10
support. The dual SAS controllers can automatically move hot data to an attached SSD and
cold data to an attached HDD for AIX and Linux environments by using the Easy Tier function.

Paul

-----Original Message-----
From: MIDRANGE-L [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Sue Baker
Sent: Tuesday, August 26, 2014 3:21 PM
To: midrange-l@xxxxxxxxxxxx
Subject: RE: New Power8 Disk Perf Issues


"Steinmetz, Paul" <PSteinmetz@xxxxxxxxxx> wrote on Tue, 26 Aug
2014 18:09:00 GMT:

Sue,

<I will agree for POWER7 and earlier systems. I will disagree for the
POWER8 S814 and S824 models with high function backplane. The
on-board disk IOAs are better than the feature
ESA3 and 5913 adapters used by many on POWER7 for write intensive and
write response time sensitive workloads.>

When you state high function backplane, you're referring to <Third is
high function backplane with 7.6GB of write cache and maximum drive
bays.> correct?


Correct.

So, if using CEC's DASD, you should almost always consider option 3
from a performance point of view.


The answer to performance questions is always "it depends on the workload requirements."

Also, are you stating that option 3 equals or outperforms a 5913? So
a fully loaded CEC with 8 - 775 GB SSDs (5.4 TB usable with RAID 5) and
option 3, the high function backplane, would be about the best performance
for an i5/OS LPAR.


No, the architecture of a POWER8 is vastly different from its predecessors. An S824 system unit can have up to 18 SFF-3 disk bays (2.5" HDDs or SSDs) plus up to 8 1.8" SSDs for a total of
26 devices mounted in the CEC.

This compares to 8 SFF disk bays on POWER7 720 and 740 systems or 6 SFF disk bays on POWER7 770 and 780 systems.



