Hence the recommendation to get some performance studies done before you decide on a configuration. Also, the current workload is only a guide to what might happen in the near future. I have a customer that deployed a really cool new application. Well designed, and it saved the company serious money. We've gone from two cores to five and are looking for more as the application is more widely adopted. My point is that as part of the MPG study you have to use the crystal ball and make some assumptions about growth and new application models. Planning for today is OK, but I think some healthy thought about the future is in order.
Jim Oberholtzer
Agile Technology Architects
On Mar 31, 2018, at 6:25 PM, DrFranken <midrange@xxxxxxxxxxxx> wrote:
Very hard to tell, but remember they support different configurations. The EJ14 can drive up to 96 drives in four 24-drive drawers. The EJ1D can only do 42 (18 internal plus 1 drawer of 24 external). So if you have 42 or fewer drives I'm guessing the EJ1Ds will do just fine. Perhaps if they are all SSD and you're going to abuse them, then an EJ14 would do better.
As a consultant I just have to say: "It Depends." :-)
- Larry "DrFranken" Bolhuis
www.Frankeni.com
www.iDevCloud.com - Personal Development IBM i timeshare service.
www.iInTheCloud.com - Commercial IBM i Cloud Hosting.
On 3/31/2018 7:20 PM, Steinmetz, Paul wrote:
Larry,
How much improvement do you think we would see with an EJ14 compared to the EJ1D?
Paul
-----Original Message-----
From: MIDRANGE-L [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of DrFranken
Sent: Saturday, March 31, 2018 6:54 PM
To: Midrange Systems Technical Discussion
Subject: Re: EJ1D/EJ1M Write Cache (Was Is this S914 config a good replacement for my S814)
Similar technology, yes, but I'm certain it blows the 5913 away. That was a great card at its debut, but it's a PCIe Gen2 card, now many years old, and its position as 'lead dog' is long past. I would say these new cards compare much more closely to the latest PCIe Gen3 cards. The lower cache is likely due to the physical limit of 42 drives that can be attached. The newer EJ14 and friends can support 96 drives, so more cache is important there.
The card actually carries the same CCIN as the RAID cards used in the POWER8 41A and 42A servers, and those cards performed quite well.
I have nothing against the FC #5913, but at this point there are better, faster cards that can drive more drawers more quickly. If you have them to migrate and they do the job, there's no specific reason to retire them that I'm aware of.
- Larry "DrFranken" Bolhuis
www.Frankeni.com
www.iDevCloud.com - Personal Development IBM i timeshare service.
www.iInTheCloud.com - Commercial IBM i Cloud Hosting.
On 3/31/2018 4:03 PM, Steinmetz, Paul wrote:
Thanks Larry,
I just couldn't find those specs.
1) Would you say the #EJ1D with integrated controllers is very similar to a 5913 PCIe2 1.8GB Cache RAID SAS Adapter? Might it even be the same card?
2) I read that the 5913 can be used in a P9 S914 or P9 S924; it is one of the few features NOT being withdrawn on the POWER7 withdrawal list.
Paul
-----Original Message-----
From: MIDRANGE-L [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of
DrFranken
Sent: Saturday, March 31, 2018 2:49 PM
To: Midrange Systems Technical Discussion
Subject: EJ1D/EJ1M Write Cache (Was Is this S914 config a good
replacement for my S814)
Paul,
Looks like 1.8GB compressed to 7.2GB effective. Note the EJ1M and EJ1D are comparable, but the 1M is set up for 12 drives + RDX while the EJ1D is 18 drives.
From IBM announce material:
(#EJ1D) - Expanded Function Storage Backplane 18 SFF-3 Bays/Dual IOA with Write Cache/Opt Ext SAS port Expanded Function Storage backplane with dual integrated SAS controllers with write cache and optional external SAS port. High performance controllers run SFF-3 SAS bays in the system unit. Dual controllers (also called dual I/O adapters or paired controllers) and their write cache are placed in integrated slots and do not use PCIe slots. Write cache augments controller's high performance for workloads with writes, especially for HDD. 1.8 GB physical write cache is leveraged with compression to provide up to 7.2 GB cache capacity. The write cache contents are protected against power loss with flash memory and super capacitors removing the need for battery maintenance.
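A quick back-of-envelope on those cache figures (a sketch only: the compression ratio is simply inferred from the 1.8 GB physical / 7.2 GB effective numbers in the announce text, and the 42-drive limit is the EJ1D maximum mentioned earlier in the thread; real-world compression depends on how compressible the write data is):

```python
# Implied EJ1D write-cache figures from the IBM announce text.
# The 4:1 ratio is inferred, not an IBM-stated guarantee.
physical_gb = 1.8   # physical write cache
effective_gb = 7.2  # effective capacity after compression
max_drives = 42     # EJ1D limit: 18 internal + 24 external

ratio = effective_gb / physical_gb
cache_per_drive_mb = effective_gb * 1024 / max_drives

print(f"implied compression ratio: {ratio:.0f}:1")
print(f"effective cache per drive at max config: {cache_per_drive_mb:.0f} MB")
```

With fewer than the maximum 42 drives attached, the per-drive share of that 7.2 GB only goes up, which is presumably why the smaller cache is adequate here.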
Read all about it here:
http://www-01.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ssi/rep_sm/3/760/ENUS9009-_h03/index.html&lang=en&request_locale=en
- Larry "DrFranken" Bolhuis
www.Frankeni.com
www.iDevCloud.com - Personal Development IBM i timeshare service.
www.iInTheCloud.com - Commercial IBM i Cloud Hosting.
On 3/31/2018 2:23 PM, Steinmetz, Paul wrote:
Gad
+1
Back in 2012, when I went from 20 141GB 15K HDDs to 18 177GB SFF-2 SSDs w/ eMLC (IBM i) Enterprise, my run times saw a 4x improvement: 1-hour jobs ran in 15 minutes.
As Larry stated, the DASD controllers are also equally important.
I went from a 2780 Ctlr w/Aux Write Cache (maximum compressed write cache of 757 MB and maximum compressed read cache of 1 GB) to dual 5913 PCIe2 1.8GB Cache RAID SAS Adapters (tri-port 6Gb).
I will be doing this again, but not from HDD.
I'm still waiting for confirmation of the amount of cache on the P9 embedded SAS controller.
As an option, I'm looking at dual PCIe3 12GB Cache RAID Plus SAS Adapters (quad-port 6Gb x8, EJ14).
Paul
-----Original Message-----
From: MIDRANGE-L [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf
Of DrFranken
Sent: Saturday, March 31, 2018 2:12 PM
To: Midrange Systems Technical Discussion
Subject: Re: Is this S914 config a good replacement for my S814
I can say with confidence that replacing a total of 25 15K spinny drives with 18 SSDs will make you a very happy admin! I would split them into 2 RAID sets to enable each of the controllers to 'own' one set.
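For what it's worth, the capacity side of the two-sets-versus-one question can be sketched quickly. This assumes a simple RAID-5 model with one drive's worth of parity per set and the 931 GB drives from Gad's proposed config; actual IBM i parity-set sizing and overhead may differ:

```python
# Usable capacity for 18 x 931 GB SSDs under a one-parity-drive-per-set
# RAID-5 model (an assumption; real parity overhead varies).
DRIVE_GB = 931

def raid5_usable_gb(drives_per_set: int, sets: int) -> int:
    """Usable GB with one drive's worth of parity per set."""
    return (drives_per_set - 1) * DRIVE_GB * sets

print(raid5_usable_gb(9, 2))   # two RAID-5 sets of 9 drives each
print(raid5_usable_gb(18, 1))  # one RAID-5 set of all 18 drives
```

The single large set yields one extra drive's worth of usable space (17 data drives versus 16), while two sets let each controller own a set and can survive one drive failure per set.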
That thing will be more like a Hellcat next to a slant-six Duster!
- Larry "DrFranken" Bolhuis
www.Frankeni.com
www.iDevCloud.com - Personal Development IBM i timeshare service.
www.iInTheCloud.com - Commercial IBM i Cloud Hosting.
On 3/31/2018 2:02 PM, Gad Miron wrote:
Jim, Dr.
Thanks for the educative responses
I'll have my BP look into the MPG option.
I'm quite happy to learn that SSD write speeds are on par with their read speeds.
The current S814 machine has 18 59E0 (283GB 15K) disks in the CEC plus 8
19B1 (283GB 15K) residing in an EXP24S expansion (in two RAID5 sets).
All disks are driven by a dual controller in the CEC (I forgot its type,
but it has a 4 MB cache I think). Memory is 128GB.
(This is no Prius, methinks.)
The S914 candidate is to have 512GB memory and 18 931GB Mainstream SAS 4k
SFF-3 SSDs, all in the CEC, with a dual controller (again I couldn't find
the type, but this one has 7GB cache).
So:
Do 18 SSD "spindles" suffice?
What RAID configuration is best: two sets of 9 SSDs, or one set of 18?
What about a hot spare?
Looking at the link you referred me to, I've noticed the following statement:
With their large capacity and lower cost per gigabyte, the drives
can provide a very cost-effective and footprint-effective solution
for many mainstream (*previously known as read-intensive*) configurations.
Note these drives are designed for *workloads with modest write
requirements....*
What say you?
TIA
Gad
message: 4
date: Sat, 31 Mar 2018 11:45:02 -0400
from: DrFranken <midrange@xxxxxxxxxxxx>
subject: Is this S914 config a good replacement for my S814 Re: (was:
What is the difference between Flash storage and a Flash Storage
system like the V9000/V7000)
Gad,
Good idea to change the Subject if you want to catch the
eye of those who can help!
Jim already made several good points but overall I would
amplify his suggestion that we don't know enough. You might have
only
7 drives that provide that 6TB of space. If you do they are 1.1TB
and 10K RPM so the proposed SSD configuration would look like a Top
Fuel dragster against my Prius. Or you might have multiple drawers
of 139GB 15K RPM drives on several large cache controllers in a
mirrored environment. MASSIVE difference between those two options!
In either case it's likely the new SSDs will win big but
we're guessing.
One more thing: where are you getting the 'SSD writes
are only slightly faster than HDD writes' statement? Waaaaay back,
the original 70GB SSDs that IBM sold for a king's ransom had write
speeds of about 120MB/s, which is rather comparable to a 15K
spinning drive on streaming writes. But the SSD generations
released in 2016 were well over 400MB/s (as high as 470MB/s).
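Putting those streaming numbers side by side (a rough sketch using only the figures quoted above; note this is sequential throughput, and random-write IOPS, where SSDs win by far more, is usually what matters on IBM i):

```python
# Streaming-write throughput from the generational figures above (MB/s).
# Rough vendor numbers, not benchmarks.
hdd_15k = 120       # roughly what a 15K spinning drive streams
ssd_original = 120  # IBM's original 70 GB SSDs
ssd_2016 = 470      # top end of the 2016-generation parts

print(f"2016 SSD vs 15K HDD on streaming writes: {ssd_2016 / hdd_15k:.1f}x")
```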
And in IBM's latest SSD announcement they state: Write
performance is "more than 25 times that of a standard 15K HDD" and
"the number of drives is still a factor in achieving satisfactory
performance, especially for IBM i."
Read all about em here:
https://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=an&subtype=ca&appname=gpateam&supplier=897&letternum=ENUS117-086
So I think you have some outdated information there.
Even PC SSDs today are pretty close to parity between read and write
performance, with writes only a tick slower than reads.
- Larry "DrFranken" Bolhuis
www.Frankeni.com
www.iDevCloud.com - Personal Development IBM i timeshare service.
www.iInTheCloud.com - Commercial IBM i Cloud Hosting.
On 3/31/2018 6:45 AM, Gad Miron wrote:
Calling all sages
We're going to replace our S814 (3 cores activated) machine with 6 TB of
internal HDDs with an S914 (3 cores activated) with 18 931GB Mainstream
SAS 4k SFF-3 SSDs.
Is this a viable DASD configuration for a write-intensive environment?
(It is common knowledge that SSD writes are only slightly faster
than HDD writes, right?)
TIA
Gad
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list
To post a message email: MIDRANGE-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: https://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives at https://archive.midrange.com/midrange-l.
Please contact support@xxxxxxxxxxxx for any subscription related questions.
Help support midrange.com by shopping at amazon.com with our affiliate link: http://amzn.to/2dEadiD