I think the problem with grid computing has always been the data. The examples given are always compute intensive (an oil company buys supercomputer grid time to analyze geological data, etc.). In reality, for normal business work against enterprise databases, it makes no sense at all.

So then IBM started drifting toward "we'll keep extra CPUs sitting in your box and you can rent those out." Which brings up the question: if those CPUs are so cheap that you can just throw them in with the box, possibly never to be used, then what exactly am I being charged for?

So then it moved to virtualization, consolidating multiple servers onto one server, a sort of grid computing taken locally, and that is the buzzword du jour.

Dean, Robert wrote:
"grid computing" was a buzzword, but that doesn't mean it's gone
away. Virtualization and "cloud computing" are the latest buzzwords
that mean pretty much the same thing. My reading is that the main
difference is that instead of virtualizing/distributing work packets,
the goal is to virtualize and distribute entire virtual systems.

It doesn't make sense to go with a grid solution in a single enterprise except for supercomputing applications. The value of
a grid would be for offering grid services to multiple clients, driving fewer boxes at higher levels of utilization (and in this
respect, it is a greener solution).

-----Original Message-----
From: web400-bounces@xxxxxxxxxxxx on behalf of Joe Pluta
Sent: Sat 8/9/2008 3:09 PM
To: Web Enabling the AS400 / iSeries
Subject: Re: [WEB400] What's the latest thinking of the best two or three web development languages/environ...
Thanks, Ralph. I was going to reply here, but I'm glad you beat me to it.

The idea that having multiple servers somehow magically provides redundancy is pretty silly. Instead, as you point out, it actually provides *multiple* failure points. That is, unless each of those servers also has redundancy of its own, at which point you're getting into absurd amounts of hardware.
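The "multiple failure points" argument can be made concrete with a little availability arithmetic. A hypothetical sketch (the 99% per-box uptime figure is purely illustrative, not from this thread): if an application depends on n servers all being up, overall availability is the product of the individual availabilities, which only drops as n grows.

```python
# Hypothetical illustration: an application that depends on n independent
# servers "in series" (all must be up for the app to work). Each box is
# assumed to be up some fraction of the time, e.g. 0.99.
def series_availability(per_box: float, n: int) -> float:
    """Probability the whole chain is up: per_box multiplied n times."""
    return per_box ** n

print(series_availability(0.99, 1))  # one box: 0.99
print(series_availability(0.99, 5))  # five-box chain: about 0.951
```

In other words, splitting one application across five boxes without per-box redundancy makes the system *less* available than the single box was, which is the point Joe and Ralph are making.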

And if those multiple applications are sharing data, that means one of two things: either every request for data goes over the network, or the data is replicated across all the servers. The former is a performance nightmare; the latter is an administration (and cost) burden.

The server farm model is a pretty poor idea from an enterprise standpoint, except in the case of something like Google, with a single application spread among a gazillion servers. Notice how the term "grid computing" has pretty much disappeared? That's because it's a bad idea for most applications. (Not to mention it's not "green" <grin>).


And if any one of multiple servers goes down for some reason, you are in better shape to continue operations because why?

Because you have redundancy for multiple servers? Why is that easier or better than redundancy for one server?

There is no balance; there are only multiple points of failure, often with a mix of more fragile OSes when Windows is involved.


Walden H. Leverich wrote:
One of the advantages of having an as/400 is that you only need one box.
True. But you need to balance that against the fact that one of the
disadvantages of having an as/400 is you only have one box. Upgrading?
Patching? Crashed? Your entire enterprise is offline. Smart?




This mailing list archive is Copyright 1997-2020 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].