Heya Larry,
I have to chip in again ;)
On Tue, Nov 11, 2008 at 10:54 PM, Larry Bolhuis <lbolhuis@xxxxxxxxxx> wrote:
> if you did you missed the entire point. Beyond the disk itself you have
> to consider the entire disk storage subsystem. How good are the RAID
> cards?
The ServeRAID 8k controllers in the better System x machines (3550,
3650) are pretty decent. If you want to attach lots of disks to a
single server (DAS), the ServeRAID 10 series is also quite good.
http://www.redbooks.ibm.com/abstracts/tips0054.html?Open
> Do they have large caches to improve disk performance?
They all max out at 256MB of cache, though - if you need anything larger
than that, a SAN is the way to go in the x world. Smaller SANs aren't
that expensive anymore.
> Are those caches redundant to survive failures?
ECC & BBU, same as the smaller POWER machines.
> Do they ever flag their batteries as
> 'too old' and require replacement BEFORE you find out they've failed and
> dumped your cache?
Yep, the ServeRAID software (if installed and configured) will notify
you of this. In larger deployments this is usually automated through
IBM Director (which I consider a POS, but it works well enough).
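You can also script that kind of battery/controller check yourself instead of going through Director. Here's a minimal sketch, assuming the Adaptec-based arcconf CLI that comes with the ServeRAID 8k plus a reachable SMTP relay - the path, addresses, and the exact GETCONFIG output format below are my assumptions, not anything from this thread:

import re
import smtplib
import subprocess
from email.message import EmailMessage

# Hypothetical values - adjust for the actual environment.
ARCCONF = r"C:\Program Files\Adaptec\arcconf.exe"  # ServeRAID 8k ships with the Adaptec CLI
ALERT_TO = "ops@example.com"
SMTP_HOST = "mail.example.com"

def battery_lines():
    """Pull adapter status from controller 1 and keep only the battery-related lines."""
    out = subprocess.run([ARCCONF, "GETCONFIG", "1", "AD"],
                         capture_output=True, text=True, check=True).stdout
    return [line.strip() for line in out.splitlines()
            if re.search(r"battery", line, re.IGNORECASE)]

def main():
    # Anything that doesn't look healthy gets mailed out.
    suspicious = [l for l in battery_lines()
                  if not re.search(r"optimal|ok|charged", l, re.IGNORECASE)]
    if suspicious:
        msg = EmailMessage()
        msg["Subject"] = "ServeRAID battery needs attention"
        msg["From"] = "raid-monitor@example.com"
        msg["To"] = ALERT_TO
        msg.set_content("\n".join(suspicious))
        with smtplib.SMTP(SMTP_HOST) as s:
            s.send_message(msg)

if __name__ == "__main__":
    main()

Run daily from Task Scheduler and that's usually enough warning to swap the battery before the controller drops back to write-through mode.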
> Do they phone home when a disk unit fails, or errors
> are occurring, or a power supply or fan is misbehaving?
Can be configured with IBM Director, yes.
> Do they even have
> fully redundant power and dual I/O paths.
Redundant power & fans yes, multiple I/O paths require a SAN (just
like the smaller POWER machines).
> In all my years I've seen AS/400
> iSeries System i machines fail only twice due to disk subsystem failure. I
> cannot count the number of reloads required in the Wintel/Lintel space.
> You can pay me now or pay me later.
First of all, you can't compare an M15 to a $1,000 1U server, because
they're meant for different workloads. My comparison to an M15 would
be a System x3650. Without software, they have almost the same price
tag (with the M15 being around 20% more expensive).
The x3650 has 1:1 redundant fans, which the M15 does not (it's N+1), plus
redundant power supplies, 6x 3.5" disk slots or 8x 2.5" slots, 4 PCI-E (x8)
slots, an onboard 256MB BBWC ServeRAID 8k controller, RSA II remote management
(IP KVM, notifications, power on/off, etc.), memory protection with
ECC/ChipKill standard and memory mirroring as an option, and so on.
They're very nice machines and haven't let me down. We've deployed about 50
of those x3650s over the past 2-3 years, and the failure rates
are similar to those of our roughly 40 Power5/Power5+ 520/515
machines - 1 or 2 machines with issues, no data loss.
> When you look at the average windoze server the disks are usually below
> about 15% full or over 90% full.
All a matter of your deployment strategy, the hardware you use, and the available budget.
> And for those servers where the disk is 90 and up percent full those poor
> systems are often there because they cannot add more disk easily. They
> hold 5 or 9 or whatever and the slots are all full. Sure you can add an
> expansion box of some flavor but that's a bunch of money and of course you
> have to put in a reasonable number of disks there to start a new RAID set.
Erm, yes. Can you tell me how exactly this is different from an M15?
> And then there is the 'how ya gonna spread your data over that?' question.
Use a SAN. DAS is meant for extremely small-scale deployments (like
1-5 servers), not for an entire server farm. The Windows
architecture is different from the one employed in the IBM i world. Not
necessarily better, just different.
> And DR is near magical. Get a good Save 21 from time to time and you've
> got everything, EVERYTHING.
Except that in the IXA/IXS/iSCSI case, this is not always a correct
and supported way to go. Imaging domain controllers is a big no-no,
and the same goes for SQL Server (which is probably only used in
infrastructure roles in an IBM i shop, but anyway).
> Ever try to swap the disks from one wintel server to another? Dang near
> NEVER works. Different disk controllers different mother boards different
> CPUs. Swaps in the integrated i environment can often be done in seconds!
> Really! And you can use the same technology to clone servers for testing
> or debugging or whatever.
Use a SAN ;)
> Its all about the planning, the design, and the implementation.
Yes. And the best way to achieve this is to let a Windows/x-based
solution be designed by people who KNOW that stuff. This is the advice
I give the OP - let this be handled by people who know their Windows
stuff. The current solutions just make the i act as an iSCSI-attached
SAN, so most of the planning / design is on the System x side.
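For what it's worth, from the Windows side the i really does just look like another iSCSI target. A rough sketch of what the x-side hookup amounts to, assuming the stock Microsoft iSCSI initiator's iscsicli.exe and a made-up portal address (wrapping it in Python versus just typing the two commands is purely a matter of taste):

import subprocess

# Hypothetical address of the System i hosting the iSCSI target.
PORTAL_IP = "192.168.10.5"

# Point the Microsoft iSCSI initiator at the portal, then list the
# targets it exposes; the LUN / NWSD configuration itself happens on the i.
subprocess.run(["iscsicli", "QAddTargetPortal", PORTAL_IP], check=True)
targets = subprocess.run(["iscsicli", "ListTargets"],
                         capture_output=True, text=True, check=True)
print(targets.stdout)

Everything after that point (formatting, clustering, whatever) is plain Windows administration, which is exactly why I'd leave it to Windows people.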
And Larry: it's 2008 now. Windows is mature and runs many
production systems with great success. It's all about using the proper
equipment and having the right people do the setup. Then it'll work
just as well as a proper IBM i solution implemented by the proper
people.
The problem is that there are lots of idiots in the Windows field with
barely any competence in the OS they work with, or who see lower prices
as the only way to distinguish themselves from the competition. That
has never ended well, and it never will.