Unless you have a high number of writes, I would go with RAID 6. That is what I have on several of my systems. I have had two instances in the past where a RAID 5 failed us, and I've been cautious about my data ever since.

Both stories were similar and within the past 10 years. We had a storage unit with 24 x 160 GB disks in RAID5 with 2 hot spares. One drive went out. With other activity on the storage system, the RAID was rebuilding at a rate of 10 MB/s and was going to take roughly 4-5 hours to complete. During the rebuild, we lost another disk and it took the whole array down. We spent hours rebuilding the array and restoring from backups. The hot spares are good, but they do not provide "immediate" redundancy until after the rebuild is complete. The second story was similar, but we had an AC failure (1 of 4 units went out). The storage array was closest to the failed unit. Because it couldn't get enough cool air, the internal server temp rose 15-20 degrees above normal and drives started failing across the entire rack.
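
(For a rough sense of that exposure window, a minimal back-of-the-envelope sketch in Python, assuming decimal GB/MB and the figures quoted above; real rebuild rates vary with concurrent load.)

    # Rough rebuild-window estimate for one 160 GB member rebuilt at ~10 MB/s
    def rebuild_hours(drive_gb: float, rate_mb_s: float) -> float:
        seconds = (drive_gb * 1000) / rate_mb_s   # GB -> MB, then MB / (MB/s)
        return seconds / 3600

    print(f"{rebuild_hours(160, 10):.1f} hours with no redundancy")   # ~4.4 hours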

I've been running RAID 6 now for the last 5 years and haven't experienced this. The performance impact has been very minimal, even on my VMware cluster with 20+ hosts.

-JA-

Jason Aleski / IT Specialist

On 12/4/2015 11:12 AM, Wilson, Jonathan wrote:
On Fri, 2015-12-04 at 07:46 -0500, Steve Pavlichek wrote:
RAID6 is a little slower due to the 2 parity drives but gives better protection.
I usually recommend RAID5 with a hot spare.

You also have the choice of the Performance or Capacity option when starting RAID.
The Performance option will give you 2 RAID sets and the Capacity option will give
you 1 set.
My gut feeling would be to go with raid6. With larger disks, the
statistical chance of a URE kicking out a drive starts to fall within the
total amount of data that has to be read to rebuild a set, so it makes more
sense. (Although statistically it may be small, and not evenly
distributed between multiple drives, which confuses the issue further.)
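
(To put a rough number on the URE argument, a small sketch. The 1-in-1e14-bits
error rate is just the commonly published consumer-drive spec, not a figure for
any particular array; enterprise drives are usually rated 1e15 or better.)

    # Chance of at least one unrecoverable read error while reading the
    # surviving members of a degraded array during a rebuild.
    def p_ure(bytes_read: float, ure_per_bit: float = 1e-14) -> float:
        return 1 - (1 - ure_per_bit) ** (bytes_read * 8)

    surviving = 23 * 160e9   # e.g. the 24 x 160 GB RAID5 story above, minus the failed drive
    print(f"{p_ure(surviving):.0%} chance of at least one URE during the rebuild")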

My gut also says: if 8 drives are first turned on on the same day and
one fails, then you can bet another will also fail, as it has the same
life span/hours of use.

Added to all of that, if a raid set was 100% utilised, the chances are
that on a normal yearly workload only 20% of that data might be accessed
regularly. Rebuilding a degraded raid set will hit 100% of all the disks,
hammering the heads all over the place for many hours and touching that 80%
for the first time in years... if it's going to fail, that would
definitely be the time another one of the disks decided to URE.

While raid/6 is slower, the question has to be: will the additional
overhead/reduction in write (and/or read) IO make a noticeable impact on
the workload thrown at it, or is it a case of it being slower but no one
would ever know, unless the system were under-spec'd with 100% disk
utilisation workloads?
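
(One way to frame that: for small random writes the textbook penalty is 4 disk
I/Os per host write on RAID5 and 6 on RAID6. A rough sketch below, assuming
~175 IOPS per 15K drive and an 18-drive set; a battery-backed write cache on
the controller hides much of this in practice.)

    # Back-of-the-envelope small-random-write throughput for an N-drive set
    def write_iops(drives: int, iops_per_drive: float, penalty: int) -> float:
        return drives * iops_per_drive / penalty

    for level, penalty in (("RAID5", 4), ("RAID6", 6)):
        print(level, round(write_iops(18, 175, penalty)), "small random writes/s, uncached")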

On a couple of curious side questions: does the i support interleaved
raid/5/6 or only dedicated parity disks? Also, does it do read checking - read all
data and check the read data against the previously written parity? Just
curious, as I know Linux mdadm doesn't, and debates over whether it should pop
up quite regularly, but no one has sponsored it or wants to put in the
time for it themselves.
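
(For what it's worth, what Linux md does offer is an after-the-fact scrub
rather than verify-on-read; a minimal sketch, assuming a Linux md array at
/dev/md0 and root access, using the sysfs interface from the kernel md
documentation.)

    # Start a one-shot consistency check on an md array and report the
    # mismatch count once the check finishes.
    from pathlib import Path
    import time

    MD = Path("/sys/block/md0/md")                      # hypothetical array name

    (MD / "sync_action").write_text("check\n")          # kick off the scrub
    while (MD / "sync_action").read_text().strip() != "idle":
        time.sleep(60)                                  # poll until the check finishes
    print("mismatch_cnt:", (MD / "mismatch_cnt").read_text().strip())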

Here's a link to Sue Baker's parity set Techdoc
https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD105880
That's an interesting read. I'm guessing that with dual raid6 sets, it's
similar to creating a raid/0 (stripe) over the top of two raid/6 sets.

I vaguely recall that the "400" (and possibly the 36/38) didn't create
stripes or JBODs (possibly multi-volume, fill disk 1 first, then 2,
etc.) but instead used some form of "scatter gun" approach where it
spread the data across the devices, but did so in a non-uniform way
based on the data and not the disk layout. Mind you, when I heard/read
this it was years and years ago, so my understanding might not have been
correct.


-----Original Message-----
From: Gad Miron
Sent: Friday, December 04, 2015 3:41 AM
To: midrange-l@xxxxxxxxxxxx
Subject: Raid Configuration

To the Hardware guys

a quick one:

Due to replacing 3 SSDs installed for test purposes with (the original) 3
HDDs, the BP is breaking the existing RAID5 set of 15 HDDs and is going to
build an 18-HDD RAID5 set (adding 3 HDDs).

My question: should we instead build a RAID6 set?
What are the pros & cons?

This is an 8286-41A P8 machine with 18 283 GB 15K disks, all in the CEC
(plus 8 more in the 5887 expansion).
I believe that there is an EJ0P on-board controller.

So RAID5 or RAID6?

thanks
gad
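
(For reference against the question above, the raw usable-capacity difference
for an 18 x 283 GB set works out as below; the exact figures the BP quotes will
differ depending on how the controller splits the drives into parity sets.)

    # Raw usable capacity, ignoring parity-set splits, spares and formatting
    drives, size_gb = 18, 283
    print("RAID5:", (drives - 1) * size_gb, "GB usable")   # one drive's worth of parity
    print("RAID6:", (drives - 2) * size_gb, "GB usable")   # two drives' worth of parity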





