On Mon, Sep 28, 2009 at 14:49, <rob@xxxxxxxxx> wrote:
What's the smallest drive you ever had to rebuild a raid set on in an i or
its predecessors? How long did it take? If you take a 140gb, divided by
the size of this smaller drive, and multiply the time it took to rebuild
this smaller drive, would you get the time it took to rebuild the 140? Or
have they drastically changed the speeds of the newer drives to bring down
the time? Do you think that if IBM came out with a 1TB drive, raid time
would be a direct multiple of the time of the 140?
Currently, the biggest drives available for servers are 600GB LFF SAS
drives, at 15kRPM.
These drives have a maximum throughput of about 200MB/s (outer disk),
and a minimum throughput of 120MB/s (inner disk).
Assuming an average throughput of 160MB/s, a rebuild would take a
little over an hour. However, rebuilds never run at that optimal rate,
since they're usually interrupted by random seeks.
15kRPM 300GB drives usually have around 100-180MB/s throughput, 150GB
have around 90-160MB/s throughput.
As you can see, the smaller drives are faster relative to their size
than the bigger drives - growing rebuild times have always been the
issue with bigger disks.
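To put numbers on that, here's a small sketch of the best-case rebuild times implied by the figures above. The average throughputs are my rough midpoints of the quoted ranges, and it deliberately ignores the random-seek interruptions mentioned, so real rebuilds would be slower:

```python
# Best-case sequential rebuild time: capacity / average throughput.
# Throughput averages below are assumed midpoints of the quoted ranges.

def rebuild_minutes(capacity_gb, avg_throughput_mb_s):
    """Best-case rebuild time in minutes for one drive."""
    return (capacity_gb * 1000) / avg_throughput_mb_s / 60

drives = [
    (600, 160),  # 600GB 15kRPM SAS: ~(200+120)/2 MB/s
    (300, 140),  # 300GB 15kRPM SAS: ~(180+100)/2 MB/s
    (150, 125),  # 150GB class:      ~(160+90)/2 MB/s
]

for cap, tput in drives:
    mins = rebuild_minutes(cap, tput)
    print(f"{cap:>4}GB @ {tput}MB/s avg -> ~{mins:.0f} min best case")
```

So the 600GB drive comes out around 62 minutes best case versus roughly 20 for the 150GB - rebuild time grows faster than throughput improves.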
However, never compare consumer SATA drives with server SAS drives
- most consumer drives are optimized for minimal noise and/or
power consumption. There are exceptions - server SATA drives
meant for nearline storage like archival or backups, not for
running databases or applications.
On a PC, you can compensate for slow disk drives by buying enough
memory if you're mostly just reading from the database, e.g. when
you're running a CMS or similar - a tactic that is not viable on the
IBM i due to memory pricing and the transactional workloads most
users are running.
However, SSDs are making hard drives obsolete - fast. While
they're still extremely uncommon in servers, many higher-end laptops
already ship with SSDs as standard. And even cheap SSDs are able to
saturate an old 3 Gigabit SAS/SATA port on their own, with no RAID
involved. This is why we see 6 Gigabit SAS/SATA being rolled out,
which is necessary for SSDs to deliver their full read speed.
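The saturation point is easy to work out: both 3Gb and 6Gb SAS/SATA use 8b/10b encoding, so only 8 of every 10 line bits carry payload. A quick sketch:

```python
# Usable payload bandwidth of a SAS/SATA link with 8b/10b encoding:
# 10 line bits carry 8 payload bits, so multiply the line rate by 0.8.

def usable_mb_per_s(line_rate_gbit):
    """Approximate usable bandwidth (MB/s) of an 8b/10b-encoded link."""
    return line_rate_gbit * 1e9 * 8 / 10 / 8 / 1e6

print(usable_mb_per_s(3.0))  # ~300 MB/s - within reach of a single SSD
print(usable_mb_per_s(6.0))  # ~600 MB/s
```

So a 3Gb link tops out around 300MB/s usable - a ceiling a single SSD can hit, while even the fastest 15kRPM disk only approaches it on the outer tracks.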
SSDs also excel at random access, something which even the most
expensive server hard disks are very poor at, needing a large number
of disk arms to compensate for their weaknesses.
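The random-access gap follows from mechanics: every random I/O on a disk pays an average seek plus half a rotation of latency. A rough illustration, assuming a typical ~3.5ms average seek for a 15kRPM server drive:

```python
# Rough random-read IOPS for one spindle: each I/O costs the average
# seek time plus half a rotation of rotational latency.
# The 3.5ms average seek is an assumed typical figure for 15kRPM drives.

def hdd_random_iops(rpm, avg_seek_ms):
    half_rotation_ms = 60_000 / rpm / 2   # average rotational latency
    return 1000 / (avg_seek_ms + half_rotation_ms)

print(hdd_random_iops(15000, 3.5))  # ~180 IOPS per drive arm
```

At roughly 180 random IOPS per arm, while even modest SSDs do thousands, you can see why arrays need many spindles just to keep up with one SSD on random workloads.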