The DUPMEDBRM from virtual back to physical tape has far less disk
activity, which is to be expected because it's READING disk instead of
WRITING disk.

Utilization is in the single digits.


On Fri, Mar 21, 2014 at 10:30 AM, Jeff Crosby <jlcrosby@xxxxxxxxxxxxxxxx> wrote:

Currently doing a DUPMEDBRM from a 3580 device to virtual tape.

NOW the drives are cranking. The 8 drives in the CEC are anywhere from
12-30% over a 3-minute period. On a couple of refreshes I saw some drives
in the 40% range. The other 8 drives in the expansion units are all at 5% or
less.

We'll be following that with a DUPMEDBRM from virtual back to an offsite
tape on the 3580 device. If that's much different, I'll post.



On Thu, Mar 20, 2014 at 4:48 PM, Jeff Crosby <jlcrosby@xxxxxxxxxxxxxxxx> wrote:

I watched the disk usage during the billing and subsequent reports.
Other than a couple of 12% readings for a short time, it never went above
single digits.

I watched the disk usage during the subsequent SAVCHGOBJ of all user libs
to tape. In 2-3 second bursts, some drives would hit 20% or even 33%, but
never for more than a few seconds. Then back to single digits.




On Thu, Mar 20, 2014 at 12:08 PM, DrFranken <midrange@xxxxxxxxxxxx> wrote:

It Depends. :-)

Seriously, it really does.

If (which is impossible) you had 100% reads, then your I/O times would be
approximately 12.5% slower (1 of 8) because of the hot spare not being
available to 'contribute'.

If (also impossible) you had 100% writes, then you should see more than
50% longer response times, as there are two additional ops required, plus
the hot spare not contributing; and don't forget the RAID card will be
busier tracking two RAID sets.

But of course you do a mix of reads and writes: more reads, less penalty;
more writes, more penalty.
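
To put rough numbers on that blend, here is a minimal Python sketch of the
back-of-the-envelope math above. The linear weighting of the two extremes is
a simplifying assumption for illustration, not a description of how the RAID
card behaves, and the 50% write figure is just the lower bound stated above.

    # Illustrative only: blended response-time penalty when one of eight
    # drives is reassigned as a hot spare.
    READ_PENALTY = 1 / 8    # one of eight arms no longer contributes
    WRITE_PENALTY = 0.50    # "more than 50% longer"; treated here as a floor

    def blended_penalty(read_fraction):
        """Linearly weight the read and write penalties by the I/O mix."""
        return read_fraction * READ_PENALTY + (1 - read_fraction) * WRITE_PENALTY

    for reads in (1.0, 0.8, 0.5):
        print(f"{reads:.0%} reads -> ~{blended_penalty(reads):.1%} slower")
    # 100% reads -> ~12.5%; 80% reads -> ~20.0%; 50% reads -> ~31.2%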

And then there is queuing theory to deal with. If your disks are
currently 3 to 5% busy, you're not going to care much if they go to 5 to
8% busy. But if they are already 35 to 40% busy and jump to 50 to 60%
busy, that's gonna hurt.
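
A minimal sketch of that queuing effect, using the textbook M/M/1
approximation (response time = service time / (1 - utilization)). A disk
subsystem is not really an M/M/1 queue and the 5 ms service time is
hypothetical, so read the numbers for shape, not prediction.

    # Illustrative M/M/1 approximation: response time grows non-linearly
    # as utilization rises, which is why the same increase hurts more
    # when the disks are already busy.
    def response_time(service_ms, utilization):
        return service_ms / (1.0 - utilization)

    service_ms = 5.0  # hypothetical per-op service time
    for before, after in ((0.04, 0.08), (0.375, 0.55)):
        print(f"{before:.0%} -> {after:.0%} busy: "
              f"{response_time(service_ms, before):.1f} ms -> "
              f"{response_time(service_ms, after):.1f} ms")
    # 4% -> 8% busy:   5.2 ms -> 5.4 ms  (nobody notices)
    # 38% -> 55% busy: 8.0 ms -> 11.1 ms (that's gonna hurt)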

From everything you've said, I really don't think you will be in trouble
here, just better protected!

- Larry "DrFranken" Bolhuis

www.frankeni.com
www.iDevCloud.com
www.iInTheCloud.com

On 3/20/2014 12:01 PM, Jeff Crosby wrote:
But, Jeff, if your run time went from 38 seconds to 5 minutes, would they
complain?

Is that what's going to happen? Everything will be slower by a factor of 7
or 8 *times* (38 seconds to 300 seconds)? Seriously, or are you saying it
for effect?




On Thu, Mar 20, 2014 at 11:44 AM, <rob@xxxxxxxxx> wrote:

Everything is relative. Many of us on this list will go on and on about
how much of a performance dog a 3-drive RAID set is on SCSI disks. As
recently stated: PA-THE-TIC! However, one poster put on here (some years
ago) that these people were coming from an S/36 and wouldn't really know
how fast the machine could go. They would be impressed with what they got.

But, Jeff, if your run time went from 38 seconds to 5 minutes, would they
complain?


Rob Berendt
--
IBM Certified System Administrator - IBM i 6.1
Group Dekko
Dept 1600
Mail to: 2505 Dekko Drive
Garrett, IN 46738
Ship to: Dock 108
6928N 400E
Kendallville, IN 46755
http://www.dekko.com





From: Jeff Crosby <jlcrosby@xxxxxxxxxxxxxxxx>
To: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxx>
Date: 03/20/2014 11:03 AM
Subject: Re: RAID 6 rebuild time - can I slash it?
Sent by: midrange-l-bounces@xxxxxxxxxxxx



In various discussions regarding disk and performance and RAID 6 going
back many months, posters keep hammering away on the performance of RAID 6.
The most important thing to us is protection.

Last October, Dr Franken posted this:
http://archive.midrange.com/midrange-l/201310/msg00754.html

I assume that's still true.

Regarding our performance, we have no performance issues whatsoever. I
mean none. No one *ever* complains about response time or how long a batch
job takes on the System i. Ever.

As an example, we're a food distribution warehouse. The orders to be
delivered today were staged yesterday and then run all at once in batch.
The operator took the option at approximately 4:35 PM. This processed every
order against inventory and generated all picking labels, the invoices, and
all documentation needed by the night crew to load the trucks. This entire
process took 38 seconds. This is very typical. The actual *print* time
for those labels and invoices was obviously longer, but the job was done.

This is followed by running a ton of reports (some hardcopy and some PDF)
in a follow-up batch job. This takes less than 2 minutes.

The above is our most complex and important business-hours task, and it's
less than 3 minutes total. Should I be worried about performance?




On Thu, Mar 20, 2014 at 10:39 AM, Wilson Jonathan
<piercing_male@xxxxxxxxxxx> wrote:

On Thu, 2014-03-20 at 08:33 -0400, rob@xxxxxxxxx wrote:
Sue,

RAID 6 would mean you'd have to lose at least two drives to be wiped out.
RAID 6 with hot spare means you'd have to lose at least three drives,
right?

RAID 6 means you can lose 2 working disks and continue running, but if 1
more working disk fails while a spare is rebuilding you're dead in the
water; unless the failed disk was the rebuilding spare itself, in which
case you're still running.
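
A toy Python model of those scenarios. The function name and bookkeeping are
illustrative, and it deliberately ignores timing: a completed spare rebuild
simply absorbs one failure, and a failure of the in-flight spare itself is
not counted as a member failure.

    # Toy model: RAID 6 tolerates 2 concurrent member failures; a hot
    # spare that has finished rebuilding replaces one failed member.
    def array_survives(failed_members, spare_rebuilt):
        effective_failures = failed_members - (1 if spare_rebuilt else 0)
        return effective_failures <= 2

    print(array_survives(2, spare_rebuilt=False))  # True: still running
    print(array_survives(3, spare_rebuilt=False))  # False: dead in the water
    print(array_survives(3, spare_rebuilt=True))   # True: spare absorbed one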


Kind of like, which is better:
RAID 5 with a hot spare, or
RAID 6 with no hot spare?
Either way, you'd have to lose two drives to be fried. I would think that
RAID 6 would be better if there was a risk the second drive would be lost
while the hot spare for RAID 5 was becoming active and still rebuilding.

With RAID 5 you lose the space of one drive to parity striping (spread out
across all drives). How much do you lose to RAID 6?

You lose 2 drives' worth of space to the redundancy, no matter the
layout or number of RAID 6 disks.
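
The arithmetic as a quick sketch (the 140 GB drive size is hypothetical):

    # RAID 5 gives up one drive's worth of space to parity, RAID 6 gives
    # up two, regardless of how many members the set has.
    def usable_gb(members, drive_gb, parity_drives):
        return (members - parity_drives) * drive_gb

    drive_gb = 140  # hypothetical drive size
    print(usable_gb(8, drive_gb, parity_drives=1))  # RAID 5: 980 GB usable
    print(usable_gb(8, drive_gb, parity_drives=2))  # RAID 6: 840 GB usable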

Ideally the parity should be spread across all the disks as opposed to
dedicated parity drives, at least as far as Linux mdadm is concerned...
it may be different for IBM i cards/systems, but in the Linux world it's
much faster to have the parity interleaved with the data, as it prevents
contention on what would become the parity disk(s).


Is there a performance degradation from removing one of the disk drives
of an 8-drive SCSI RAID set to become a hot spare?
I would think there would be two performance hits. One, dropping from 8
drives supporting the RAID stripe down to only 7. The other performance
hit would be one less disk arm assisting while it's just sitting there
waiting to be a hot spare.







From: Sue Baker <sue.baker@xxxxxxxxxx>
To: midrange-l@xxxxxxxxxxxx
Date: 03/19/2014 06:34 PM
Subject: Re: RAID 6 rebuild time - can I slash it?
Sent by: midrange-l-bounces@xxxxxxxxxxxx



Jeff Crosby <jlcrosby@xxxxxxxxxxxxxxxx> wrote on Wed, 19 Mar 2014 16:29:52 GMT:

On a Saturday of my choosing, our HW service provider is going to take
our System i 520 from 2 8-drive RAID 5 sets to 2 8-drive RAID 6 sets
w/hot spare.


Wow! This is quite a change in data protection.

You realize that in order to have your data at risk, you need to lose 2
drives in the RAID6 array?

Generally, I recommend to customers to do RAID5 + hot spare or RAID6. I
see it as more important to have as many disk units as possible in the
RAID6 set than to gain the tiny reduction in the delay before a rebuild
begins on a single failed drive, which is all a hot spare provides.

Time to create the RAID6 arrays is mostly dependent on the size of the
drives. Other factors are whether there is data on them or not (% full is
irrelevant) and the type of disk controller.























--
Jeff Crosby
VP Information Systems
UniPro FoodService/Dilgard
P.O. Box 13369
Ft. Wayne, IN 46868-3369
260-422-7531
www.dilgardfoods.com

The opinions expressed are my own and not necessarily the opinion of my
company. Unless I say so.




